What Nokia have done here is to merge everything - terrain, surface image, buildings and trees - into the same model. They're still using the classic chunked level of detail approach, just with more complex models, which the graphics card handles with ease.
This requires more work on the server side to prepare the data, but once it is done it is really fast for the client. The main disadvantage is that the data ends up being very static - you can't move objects around, for example.
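For illustration only, here's a rough JavaScript sketch of the classic chunked-LOD selection step described above. This is not Nokia's code; the chunk fields (geometricError, boundingSphere, children) and the camera helpers are made-up names:

    // Hedged sketch of chunked LOD: refine a chunk into its children whenever
    // its projected screen-space error is still too large for the current view.
    function selectChunks(chunk, camera, maxScreenError, out) {
      const distance = camera.distanceTo(chunk.boundingSphere.center); // assumed helper
      const screenError = (chunk.geometricError / distance) * camera.screenFactor; // assumed helper
      if (screenError <= maxScreenError || chunk.children.length === 0) {
        out.push(chunk); // coarse enough: draw this merged terrain+building+tree mesh
      } else {
        for (const child of chunk.children) {
          selectChunks(child, camera, maxScreenError, out); // refine further
        }
      }
      return out;
    }

Chunks selected but not yet resident would be requested from the server; everything already resident is simply drawn as-is, which is why the client stays fast once the data has been prepared.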
P.S. I'm currently working on open source WebGL globes like OpenWebGlobe (www.openwebglobe.org) and WebGLEarth (www.webglearth.org). If you're interested in this sort of thing, I recommend reading www.virtualglobebook.com .
A little more here: http://news.ycombinator.com/item?id=3429641
In short, your post is mainly uninformative.
>> Normally, these are sent to the client separately and merged in the graphics card
It means very little. Moreover, you can't really know what the batching and draw-call scheme is in Google Earth, nor in this Nokia 3D maps.
>> Special routines are used to draw trees (e.g. billboards).
Mmmh. OK, OK. 3D applications are complicated; there are special routines for a lot of things, btw...
>> What Nokia have done here is to merge everything - terrain, surface image, buildings and trees - into the same model.
I agree with that... loosely speaking. It doesn't mean there is a single mesh, though, and there isn't one. It can be confusing.
What you mean is that there is one skin.
>> They're still using the classic chunked level of detail approach, just with more complex models, which the graphics card handles with ease.
You don't know what the chunking and LOD algorithm is, and I'd guess it might be very innovative. Or not. You're right that it's broadly the chunked-LOD approach, but that can be confusing; the specific LOD algorithm is probably quite innovative.
>> This requires more work on the server side to prepare the data, but once it is done it is really fast for the client. The main disadvantage is that the data ends up being very static - you can't move objects around, for example.
This is just false. Please don't take it badly, but people may be misled by that, and it's about as false as it gets. There's little point in explaining why in detail, but it goes roughly like this: "there isn't necessarily more server-side preparation just because the mesh isn't constructed on the fly; it isn't necessarily faster on the client - you can't claim that, it depends heavily on the draw-call scheme, the vertex complexity, the texture fetches, etc.; and you're right that the data is static, but even that is highly doubtful - 3D programmers are smart. Take the example of moving BSPs in the Quake engine: who could have said that BSPs could move?"
Anyway, thank you very much for the links and references, they are pretty interesting.
edit: please note that I'm making every effort to keep this post as constructive as possible.
Install WebGL inspector http://benvanik.github.com/WebGL-Inspector/ and you can see exactly what batching and draw calls are used in Nokia 3D maps :-)
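If you don't want to install the extension, a crude way to get similar numbers is to monkey-patch the context's draw functions yourself - a minimal sketch, not WebGL Inspector's actual mechanism:

    // Count draw calls per frame by wrapping the WebGL draw entry points.
    function countDrawCalls(gl) {
      const counters = { drawArrays: 0, drawElements: 0 };
      for (const name of ['drawArrays', 'drawElements']) {
        const original = gl[name].bind(gl);
        gl[name] = function (...args) {
          counters[name] += 1;
          return original(...args);
        };
      }
      return counters; // read (and reset) these once per frame
    }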
Disclaimer: I haven't looked at this in detail; my statements about the grouping of single-LOD objects were based on watching how the image changed as data was progressively loaded.
>> these are sent to the client separately
In different socket packets? On different ports? In different 'game objects'? In different cache lines? In different structs/classes, or in different meshes? Sorry, but meshes and classes aren't 'sendable' over a network as such...
>> merged in the graphics card
Sorry, but graphics cards have no "merge" function at all - again, it doesn't mean anything.
and so on...
The client is the WebGL application running in the user's browser. It requests the appropriate data from Nokia's servers for the area that the user is looking at. This is what I mean by "being sent to the client".
By "merged in the graphics card" I mean that the final image is composed in the graphics card.
Have a look at vterrain.org to get an idea of how these things work.
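To make the terminology concrete, here's a hedged sketch of what that request/compose cycle looks like in a WebGL client. The tile URL and binary layout are invented for illustration, not Nokia's actual protocol:

    // Fetch one tile's merged mesh, upload it to the GPU, and draw it.
    async function loadAndDrawTile(gl, program, tileUrl) {
      const raw = await (await fetch(tileUrl)).arrayBuffer();
      const vertices = new Float32Array(raw); // assumed layout: xyz triples

      const vbo = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
      gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

      gl.useProgram(program);
      const position = gl.getAttribLocation(program, 'a_position');
      gl.enableVertexAttribArray(position);
      gl.vertexAttribPointer(position, 3, gl.FLOAT, false, 0, 0);

      // "Composed in the graphics card" = the GPU rasterizes this into the final image.
      gl.drawArrays(gl.TRIANGLES, 0, vertices.length / 3);
    }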
You were going to mislead people with an aggregation of technical nonsense and statements of the obvious, mixed in with some good links.
I knew your initial post was as empty as what you're saying now: the server sends data (and the right data, Bob!) to the client, then the graphics card composes the image (dude!), and the CPU executes some special instructions.
It's a fact: people penalized me. I understand - an (ultra)positive attitude is preferred over a kind of technical and scientific honesty and rigor. Even my bug report got heavily downvoted =) Bugs are negative, gnaaaa :>
Why not. I guess I have to apologize then.
Technical corrections can add a lot of value, but that kind of attitude overshadows any useful value you might have added to the discussion, and it makes it look like you don't intend to add any value at all. Effectively, it made your entire post look like a verbose "Nuh-uh!".
[Edit] Earlier LAST year, this year's only a few days old ;-)
WebGL is really impressive technology. The combination of a widely-deployed, widely-used language (JavaScript) with high-performance graphics (WebGL) makes for a surprisingly capable platform for cross-platform game development. Once WebGL arrives on mobile and fullscreen support gets added on the desktop, I think it will become very, very popular.
Of course, the graphics capabilities aren't as good as full-power OpenGL or Direct3D, but they're plenty good enough for a lot of applications.
This is why I think that Microsoft will be forced to support it: they'll have a hard time convincing the public to buy into Microsoft's platform if the public can't play their favourite games on it.
In the meantime, Chrome Frame provides WebGL inside Internet Explorer and you don't even need admin rights to install it :-)
Instead, MS is pushing performance improvements and hardware acceleration for Canvas and SVG. This is NVIDIA, but to give an example of the possibilities:
IMO, focusing on these isn't a bad thing, because these 2D technologies are substantially easier to use (e.g., SVG is declarative and integrates with CSS). Though, WebGL is obviously more expressive.
We may also see some WebGL-derived technologies make their way back into CSS + SVG. Similar to SVG filters for CSS:
Proposed GLSL shaders for CSS:
I would love to see shaders on CSS, but GLSL is such an ugly layer to add on top of a fairly nice design. Notice how the SVG filters are so much simpler to specify than the GLSL-on-CSS proposal.
I would much rather Adobe had designed a more restricted, declarative little language that compiles easily to GLSL than bolt an almost-Turing-complete C variant onto CSS that is hard to reason about, hard to make safety guarantees for (most of the WebGL-crashes-video-drivers issues still haven't been solved, aside from the ham-fisted "we will block WebGL if we see this set of drivers" solution), and hard to interoperate with.
I wouldn't even be surprised if MS adopts Webkit.
For MS to leverage IE 'dominance' would be a losing game. They no longer have control over the web. I have no doubt that MS will attempt to control the web with W8 but I think they will lose and do so quickly.
But – as is already obvious – dominance is indeed no longer something they have, will realistically achieve ever again or are able to leverage. If they want to have any say at all when it comes to the web’s future they have to play the standards game. They have to cooperate.
Microsoft is keenly aware of that (though maybe not entirely comfortable), as is evident from the direction they took with IE.
The real problem lies not inside the engine but inside Microsoft themselves. Specifically, within the .NET group. I know everyone on Hacker News loves Ruby, so I'll use that as an example. Microsoft wanted the dynamic language stylings that Ruby offered, so they spent 3 years developing IronRuby that ran on the .NET CLR. Then they suddenly dropped it without warning. Why? Because they had extracted everything they wanted from it. Keeping the technology up to date would not give them anything more than what they already had. Microsoft benefited from it, and when they no longer did, they dropped it. Everything that happens inside Microsoft's core is to strengthen their sellers: Windows and Office. If Windows or Office needs a new technology, they will take it, use it, and .NET-ify it until it becomes proprietary.
If they were to swap Trident for Webkit, it would be the same thing. IE11 built on Webkit for a few years, their development staff would learn from it, and the next release would see Trident 7 (IE12) back in form. Microsoft takes with only nominal giving because that's great for business. They can learn from outside technologies, then use that knowledge to lock people in tighter with better tech.
It's been a while since we've seen Microsoft in true form, pioneering and leveraging their weight to shape the market for their benefit. What we have right now is Microsoft in damage control mode. Moving to Webkit would be more of that, strengthening Trident by sucking the essence out of Webkit or Gecko; directing the flow of HTML5 (and pushing for MSHTML6 afterwards) would be the return of the powerhouse. It'll be interesting to see where things go, but even as someone who sees Microsoft as the best tool for the job in certain situations, I wouldn't place any bets on Microsoft being the dominant force on the web... ever. Luckily (for them), desktops aren't going anywhere anytime soon.
edit - I should add that, to your point (and mine), Microsoft already does use Webkit where it is advantageous for them: Mac OSX. Instead of continuing development on IE for OSX, they switched Office to Webkit for the Mac. I'd have to believe Trident would have suffered without that move (circa Office 2011).
Nokia scores a point with me here, if they keep delivering things like this I may even consider buying one of their phones one day.
The LIDAR data is also used in the Nokia City Scene app http://www.youtube.com/watch?v=_MxnUAVhdnU Worth noting is that you can click on every building, i.e. the 3D information is combined with regular streetview data.
Australian company Nearmap started with exactly the same goals and have a similar product (custom aerial photography system with automated processing), but they don't seem to have figured the 3D photogrammetry part out yet.
Another impressive spot is the top of the Stratosphere Tower in Las Vegas -- it manages to capture the spike at the top fairly well. It'd be interesting to know how much hand-editing they did for sites of interest like that, and how they represent hand-edits in a way that can be re-applied when new lidar datasets come in.
When zoomed into an area for which there is 3D building coverage, it feels almost game-like. And I say that from a vantage point of some relevance. :)
The only real problem comes with pushing pixels around, as the other reply mentioned. I can watch QT video fine, but if I go to Vimeo or YouTube, I can't really get a good playback out of Flash. Generally Flash is bad on this machine. I'm a long time Mac user so I'm not too unused to this, but it seems a little worse than on an iMac or something like that.
I think if you use an IDE for dev work you should check it out at an Apple Store or something to make sure the resolution works for you. That's really the only thing I'd consider. My 11'' is really just too small for me, having been spoiled by dual 30''s on my desktop. If you use a text editor or vim/emacs it'll probably be fine, but IntelliJ or Eclipse or whatever just have too many windows to manage in the space, in my opinion.
The simple deciding factor is an SSD in the laptop. It's hard to overstate how much of a performance difference these make in compiling, localhost-served webpages, etc.
Stuff like this does look genuinely awesome, but Google Maps provides a whole different set of functionality. Searching, routing, etc.
For actual Nokia Maps (with search and routing) go to http://maps.nokia.com/
I tried zooming over my workplace in Darwin, AU, and the building we work in isn't even there... (it's roughly 5 years old)
I imagine there is probably a mash-up of old/new map data in there depending on the population of a given place.
On latest Chrome: There was a WebGL compatibility problem. Please check system settings.
When you see run-the-business type web apps being written in non standard technology, then you can complain. When you see a neat toy being written in non standard technology, take it for what it is.
Is it a demo of proprietary technology that happens to be baked into a couple of specific web browsers and video card drivers, or is it a demo of what's possible using a (new, emerging) set of "web" standards?
Because "WebGL" certainly sounds like the name of a standard to me, and very few people expect a "web" demo to care about which brand of video card they have installed.
And since you brought it up, "HTML5 is not standardized" is a little disingenuous. Regardless of its ratification state, companies claim and market HTML5-ness precisely to signal their commitment to open standards as opposed to proprietary technology. Or maybe it's come to mean "anything that isn't Flash." In any case, the implicit promise there is that users will enjoy app functionality with minimal-to-zero worries about client-side configuration or component choice.
WebGL has a 1.0 specification, but it still is not a standard as defined by the W3C / Web Hypertext Application Technology Working Group (WHATWG). A specification is one of the steps towards reaching a standard, so WebGL and HTML5 are well on their way but not there yet. Standards usually don't care what name they're marketed under (much like 4G-advertised mobile service that doesn't actually reach the 100 Mbps/1 Gbps the standard dictates).
At this point (much like with the aforementioned 4G) HTML5 means about as much as "Web 2.0" does. It's a set of competing implementations with many cross-platform features that are almost guaranteed to make it into the final standard, and a few vendor-specific implementations that are hoping to make it (if they prove their worth). Your assertion of 'Flash-like content that isn't implemented in Flash' (to paraphrase) is quite accurate in current implementations.
To sum it up, the core HTML5 that companies actually market towards is all but standard (offline storage, AJAX-like content control, Canvas, etc). The really cool things that make the front page of Hacker News and Reddit and require you to be running the beta Chrome or nightly Firefox are generally things that the vendor is hoping will make the standard. Marketing is a powerful thing, but not always accurate.
1. User visits website with late-model hardware/OS.
2. Website says "this site requires browser foo."
3. User installs browser foo and reloads website.
4. Website says "error - check system configuration."
A technology demo with highly-specific client requirements, especially on the web, and especially when the demo plays the look-mom-no-plugins card, should try to enumerate the actual requirements. In this case, the requirement that after installing the latest Chrome, the fool at the keyboard navigate to chrome://flags and hit the big "turn WebGL on" toggle.
I understand why these types of doc omissions happen, but it's really a pretty serious bug. Every user that hits 1-4 above is a user who is actively dissuaded from caring about the technology that the rest of the site was designed (at non-trivial expense) to promote.
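As a rough illustration of the kind of check that would avoid step 4, the demo could detect WebGL up front and explain what's missing - the message text here is made up:

    // Feature-detect WebGL and return a human-readable requirement, if any.
    function missingRequirement(canvas) {
      const gl = canvas.getContext('webgl') ||
                 canvas.getContext('experimental-webgl');
      if (!gl) {
        return 'WebGL is unavailable: enable it under chrome://flags, or check ' +
               'whether your GPU/driver is blacklisted by the browser.';
      }
      return null; // requirements satisfied
    }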