Nokia Maps 3D (WebGL) (nokia.com)
271 points by jasondavies | 90 comments



This is a really impressive demo. Most virtual globes (e.g. Google Earth) separate the terrain, surface image and building data. Normally, these are sent to the client separately and merged in the graphics card: the surface image is texture mapped onto the terrain, and then the building data is drawn separately on top. Special routines are used to draw trees (e.g. billboards).
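
(For the curious: a billboard is just a flat, textured quad rotated each frame to face the camera, so a 2D tree sprite reads as 3D from any direction. A minimal sketch of the idea in JavaScript - the names are illustrative, not from Nokia's or Google's actual code:)

    // Minimal sketch: rotate a billboard about the vertical axis so its
    // textured quad always faces the camera. Names are illustrative.
    function billboardYaw(treePos, cameraPos) {
      var dx = cameraPos.x - treePos.x;
      var dz = cameraPos.z - treePos.z;
      return Math.atan2(dx, dz); // yaw in radians, applied before drawing
    }

    // Example: tree at the origin, camera to the north-east.
    var yaw = billboardYaw({x: 0, z: 0}, {x: 10, z: 10}); // ~0.785 (45 deg)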

What Nokia have done here is to merge everything - terrain, surface image, buildings and trees - into the same model. They're still using the classic chunked level of detail approach, just with more complex models, which the graphics card handles with ease.
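
(For anyone unfamiliar with chunked LOD: the terrain is pre-cut into a tree of chunks at increasing detail, and the renderer refines until each chunk's projected geometric error drops below a pixel threshold. A rough sketch of the textbook selection loop - this is the generic algorithm, not Nokia's actual code:)

    // Rough sketch of standard chunked-LOD selection (not Nokia's code).
    // Each chunk carries a precomputed geometric error and optional children.
    var MAX_PIXEL_ERROR = 2; // refine until error is under ~2 pixels

    function distanceToSphere(p, s) {
      var dx = p.x - s.x, dy = p.y - s.y, dz = p.z - s.z;
      return Math.max(1, Math.sqrt(dx * dx + dy * dy + dz * dz) - s.radius);
    }

    function selectChunks(chunk, camera, visible) {
      var dist = distanceToSphere(camera.position, chunk.boundingSphere);
      // Project the chunk's geometric error to approximate screen pixels.
      var screenError = (chunk.geometricError / dist) * camera.screenFactor;
      if (screenError < MAX_PIXEL_ERROR || !chunk.children) {
        visible.push(chunk); // detailed enough: draw this chunk
      } else {
        chunk.children.forEach(function (child) {
          selectChunks(child, camera, visible); // refine into children
        });
      }
    }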

This requires more work on the server side to prepare the data, but once it is done it is really fast for the client. The main disadvantage is that the data ends up being very static - you can't move objects around, for example.

P.S. I'm currently working on open source WebGL globes like OpenWebGlobe (www.openwebglobe.org) and WebGLEarth (www.webglearth.org). If you're interested in this sort of thing, I recommend reading www.virtualglobebook.com .


What? It's not just merging the different datasets into models; it's a complete, accurate 3D model of the terrain from C3 Technologies (now owned by Apple, btw). They take thousands of low-altitude photos and do a Photosynth-esque reconstruction.


Lots of WebGL these days. Since you're working directly on this technology, I'd like to know: what does its future look like?


I think the future for WebGL is very bright, especially as it becomes more widely available on mobile and a full-screen mode with mouse capture gets added on the desktop (critical for games).

A little more here: http://news.ycombinator.com/item?id=3429641


Hmm, sorry, but I have to disagree with you. I can't let this stand without some corrections; non-expert readers could take it at face value.

In short, your post is mainly uninformative.

>> Normally, these are sent to the client separately and merged in the graphics card

It means nothing. Moreover, you can't really know what the batching and draw-call scheme is in Google Earth, nor in this Nokia 3D Maps.

>> Special routines are used to draw trees (e.g. billboards).

Hmm. OK, OK. 3D applications are complicated; there are special routines for a lot of things, btw...

>> What Nokia have done here is to merge everything - terrain, surface image, buildings and trees - into the same model.

I agree with that, loosely speaking. But it does not mean there is one mesh, and there isn't one. It can be confusing. What you mean is that there is one skin.

>> They're still using the classic chunked level of detail approach, just with more complex models, which the graphics card handles with ease.

You don't know what the chunking and LOD algorithm is, and I'd guess it might be very innovative. Or not. You're talking about the general chunked-LOD approach, so you're right, but it can be very confusing; the LOD algorithm here may well be quite novel.

>> This requires more work on the server side to prepare the data, but once it is done it is really fast for the client. The main disadvantage is that the data ends up being very static - you can't move objects around, for example.

This is just false. Please don't take it badly, but people may be misled by this, and it is as wrong as can be. There is little benefit in explaining why at length, but roughly: it isn't that there is no server-side preparation just because the mesh isn't constructed on the fly; it isn't necessarily faster on the client (you can't claim that; it depends heavily on the draw-call scheme, the vertex complexity, the texture fetches, etc.); and while you're right that the data is static, even that is doubtful in the long run. 3D programmers are smart. Take the example of moving BSPs in the Quake engine: who could have said that BSPs could move?

Anyway, thank you very much for the links and references; they are pretty interesting.

edit: please note that I'm making every effort to keep this post as constructive as possible.


>>> Normally, these are sent to the client separately and merged in the graphics card

> It means nothing. Moreover, you can't really know what the batching and draw-call scheme is in Google Earth, nor in this Nokia 3D Maps.

Install the WebGL Inspector http://benvanik.github.com/WebGL-Inspector/ and you can see exactly what batching and draw calls are used in Nokia 3D Maps :-)

Disclaimer: I haven't looked at this in detail; my statements about the grouping of single-LOD objects were based on watching how the image changed as data was progressively loaded.


Hehe, I should have been direct the first time. You are positive-trolling. Your post is just spammy, uninformative technical bullshit.

>> these are sent to the client separately

In different socket packets? On different ports? In different 'game objects'? In different 'cache lines'? In different 'structs'/'classes', or in different meshes? Sorry, but in that case, meshes and classes are not sendable over a network...

>> merged in the graphics card

Sorry, but graphics cards have no "merge" function at all - I mean, again, it makes no sense.

and so on...


I don't think I'm the troll here.

The client is the WebGL application running in the user's browser. It requests the appropriate data from Nokia's servers for the area that the user is looking at. This is what I mean by "being sent to the client".

By "merged in the graphics card" I mean that the final image is composed in the graphics card.

Have a look at vterrain.org to get an idea of how these things work.


I knew I was right.

You were going to mislead people with an aggregate of technical nonsense and statements of the obvious, mixed with some good links.

I knew that your initial post was as empty as what you are saying now: the server sends data (and the right data, Bob!) to the client, then the graphics card composes the image (dude!) and the CPU executes some special instructions.

Good boy, right?

But.

It's a fact: people penalized me. I understand; an (ultra)positive attitude is preferred over a kind of technical and scientific honesty and rigor. Even my bug report has been heavily downvoted =) Bugs are negative, gnaaaa :>

Fair enough. I guess I have to apologize, then.


I'd suggest that you probably got downvoted for saying things like "This is just false ... There is little benefit in explaining why ...". Most of your responses provided very little information other than to disagree with the post you replied to.

Technical corrections can add a lot of value, but that kind of attitude overshadows any useful value you might have added to the discussion, and it makes it look like you don't intend to add any value at all. Effectively, it made your entire post look like a verbose "Nuh-uh!".


Well, you are right.


The mapping team at Nokia is by far the best software development team in the organization (maybe with the exception of Trolltech/Qt), and it's surviving the MSFT integration. It's (largely) the legacy of the successful acquisition of Gate5 in Berlin -- and somehow the team there was able to resist full assimilation into the Borg. I was talking to a Nokian today who commented that in Nokia, "Berlin is the new Helsinki".


Your friend is right. I freelanced there earlier this year to create a prototype of Ovi Maps on Windows Phone. I couldn't stay on to see the production version through (commitments in London), but what they shipped on the Lumia (Maps and Drive) is awesome.

[Edit] Earlier LAST year, this year's only a few days old ;-)


Smooth as butter! Too bad there is no way to search or get permalinks to specific location-view combinations, but hey, it's a demo :)


I think this is destined to eventually replace their plugin-based version at http://maps.nokia.com/ - at least for WebGL browsers.


I wonder if MS will ever support WebGL? When they had 95% market share they could afford not to support new tech, safe in the knowledge that the rest of the industry wouldn't bother coding to it. Now they're sub-50% in a lot of sectors and there are a lot of visually impressive tech demos coming out that they don't support.


I think that Microsoft will find themselves forced to support WebGL.

Why?

Games.

WebGL is really impressive technology. The combination of a widely-deployed, widely-used language (JavaScript) with high-performance graphics (WebGL) makes for a surprisingly capable platform for cross-platform game development. Once WebGL arrives on mobile and a full-screen mode gets added on the desktop, I think it will become very, very popular.

Of course, the graphics capabilities aren't as good as full-power OpenGL or Direct3D, but they're plenty good enough for a lot of applications.

This is why I think that Microsoft will be forced to support it: they'll have a hard time convincing the public to buy into Microsoft's platform if the public can't play their favourite games on it.

In the meantime, Chrome Frame provides WebGL inside Internet Explorer and you don't even need admin rights to install it :-)


They probably won't, which is why in an earlier thread I said that IEN will always be IE6. My assumption would be that they'd do something along the lines of webDirectX and we'd have to create a shim to give it a common interface.


Agreed. Given DirectX and Direct3D, MS is unlikely to support a derivative of OpenGL. That may eventually change if WebGL becomes widely adopted, forcing their hand, but the current lack of support in IE9+ is a major inhibitor of adoption. I doubt they would create a competing standard (such as WebDirectX).

Instead, MS is pushing performance improvements and hardware acceleration for Canvas and SVG. This is NVIDIA's work, but it gives an example of the possibilities:

http://developer.nvidia.com/nv-path-rendering

IMO, focusing on these isn't a bad thing, because these 2D technologies are substantially easier to use (e.g., SVG is declarative and integrates with CSS). Though, WebGL is obviously more expressive.

We may also see some WebGL-derived technologies make their way back into CSS + SVG. Similar to SVG filters for CSS:

http://www.w3.org/Graphics/fx/

Proposed GLSL shaders for CSS:

http://www.adobe.com/devnet/html5/articles/css-shaders.html


(And the irony behind "WebGL is a derivative of OpenGL" is that on Windows (at least for Chrome and Firefox), WebGL is actually all based on Direct3D, via ANGLE:

http://code.google.com/p/angleproject/)

I would love to see shaders on CSS, but GLSL is such an ugly layer to add on top of a fairly nice design. Notice how the SVG filters are so much simpler to specify than the GLSL-on-CSS proposal.

I would much rather Adobe designed a more restricted, declarative little language that compiles easily to GLSL than bolt an almost-Turing-complete C variant on top of CSS, which is hard to reason about, hard to guarantee the safety of (most of the WebGL-crashes-video-drivers issues have still not been solved, aside from the ham-fisted "we will block WebGL if we see this set of drivers" solution), and hard to interoperate with.


I disagree. I think their hand is going to be forced because the future is mobile, which IE does not control.

I wouldn't even be surprised if MS adopts Webkit.

For MS to leverage IE 'dominance' would be a losing game. They no longer have control over the web. I have no doubt that MS will attempt to control the web with W8 but I think they will lose and do so quickly.


Hm, I’m not sure whether it’s in Microsoft’s best interest to adopt Webkit. I think they will stay with their own rendering engine.

But – as is already obvious – dominance is indeed no longer something they have, will realistically achieve ever again or are able to leverage. If they want to have any say at all when it comes to the web’s future they have to play the standards game. They have to cooperate.

Microsoft is keenly aware of that (though maybe not entirely comfortable), as is evident from the direction they took with IE.


The Microsoft Trident engine is getting more powerful with each iteration. It has been around since IE4 in 1997, and seeing where it is today shows how extensible it is.

The real problem lies not inside the engine but inside Microsoft themselves. Specifically, within the .NET group. I know everyone on Hacker News loves Ruby, so I'll use that as an example. Microsoft wanted the dynamic language stylings that Ruby offered, so they spent 3 years developing IronRuby that ran on the .NET CLR. Then they suddenly dropped it without warning. Why? Because they had extracted everything they wanted from it. Keeping the technology up to date would not give them anything more than what they already had. Microsoft benefited from it, and when they no longer did, they dropped it. Everything that happens inside Microsoft's core is to strengthen their sellers: Windows and Office. If Windows or Office needs a new technology, they will take it, use it, and .NET-ify it until it becomes proprietary.

If they were to swap Trident for Webkit, it would be the same thing: IE11 built on Webkit for a few years, their development staff learning from it, and the next release would see Trident 7 (IE12) back in form. Microsoft takes with only nominal giving because that's great for business. They can learn from outside technologies, then use that knowledge to lock people in tighter with better tech.

It's been a while since we've seen Microsoft in true form, pioneering and leveraging their weight to shape the market for their benefit. What we have right now is Microsoft in damage-control mode. Moving to Webkit would be more of that: strengthening Trident by sucking the essence out of Webkit or Gecko, while directing the flow of HTML5 (and pushing for MSHTML6 afterwards) would be the return of the powerhouse. It'll be interesting to see where things go, but even as someone who sees Microsoft as the best tool for the job in certain situations, I wouldn't place any bets on Microsoft being the dominant force on the web... ever. Luckily (for them), desktops aren't going anywhere anytime soon.

edit - I should add that, to your point (and mine), Microsoft already does use Webkit where it is advantageous for them: Mac OSX. Instead of continuing development on IE for OSX, they switched Office to Webkit for the Mac. I'd have to believe Trident would have suffered without that move (circa Office 2011).


Slight nit, but Trident never existed on the Mac; it was a separate rendering engine called Tasman (written in Israel and project-led by Tantek Çelik).


It looks like only the cities that are labeled have 3D data (buildings etc), but those that do look phenomenal. Even the trees look pretty good!


No, unlisted cities may be partially covered. There is quite an amount of coverage for Berlin, Germany, for example (far from complete though).


I just checked Copenhagen and it has 3d buildings as well.


It didn't even raise my computer's fan speed from its lowest point, and normally a YouTube video can be enough to do that. That's impressive.

Nokia scores a point with me here, if they keep delivering things like this I may even consider buying one of their phones one day.


Includes anaglyph 3d mode too if you put "nw.setRedBlueStereo(true, 10.0, 10.0)" in your js console.
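
(Under the hood, red/blue stereo is usually done by rendering the scene twice with horizontally offset cameras and a colour mask on each pass. A hand-wavy sketch of the technique - drawScene here is a hypothetical helper, not the actual maps code:)

    // Hand-wavy sketch of red/cyan anaglyph rendering in WebGL.
    // gl is a WebGLRenderingContext; drawScene(eyeOffset) is a hypothetical
    // helper that renders the whole scene from a camera shifted sideways.
    function drawAnaglyphFrame(gl, drawScene, eyeSeparation) {
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

      gl.colorMask(true, false, false, true);  // left eye: red channel only
      drawScene(-eyeSeparation / 2);

      gl.clear(gl.DEPTH_BUFFER_BIT);           // keep colour, reset depth

      gl.colorMask(false, true, true, true);   // right eye: green + blue
      drawScene(+eyeSeparation / 2);

      gl.colorMask(true, true, true, true);    // restore for the next frame
    }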


Awesome! Though it seems to work better for my particular anaglyph glasses if you set the values at ~40


I don't have any glasses to try it with unfortunately, but cool to hear it works!


Can anyone fill us in on how they're collecting such accurate 3D detail for all these buildings? I mean are they flying airplanes with 360 degree cameras over the major cities at low altitude, for instance?


They explain it in the making-of video http://www.youtube.com/watch?v=emKttWFcJ_g Basically it's cars with LIDAR and cameras, plus planes and satellites.

The LIDAR data is also used in the Nokia City Scene app http://www.youtube.com/watch?v=_MxnUAVhdnU Worth noting is that you can click on every building, i.e. the 3D information is combined with regular street-view data.


It's based on C3 Technologies' product, which was unfortunately acquired by Apple, so don't count on Nokia's contract being extended. It uses a custom aerial camera system and photogrammetry toolchain to create 3D data with minimal human intervention.

Australian company Nearmap started with exactly the same goals and have a similar product (custom aerial photography system with automated processing), but they don't seem to have figured the 3D photogrammetry part out yet.


I'd really like to know that too. Also, are they collecting data themselves? If not, where are they getting it from?


It's based on a virtual cityscape, which is painted with images taken much like Street View's. The 3D models are built with data from Navteq's Journey View system, using lidar (http://en.wikipedia.org/wiki/LIDAR). Photos are then stitched and rendered onto the 3D models.


Thanks for the details! This technique works surprisingly well. (One of the artifacts I was able to find is at the base of the Bay Bridge in SF's SOMA -- there's a vertical wall that has the street surface projected up along it rather than an actual hole underneath the bridge. That does seem like a challenging case for airborne lidar + stitching.)

Another impressive spot is the top of the Stratosphere Tower in Las Vegas -- it manages to capture the spike at the top fairly well. It'd be interesting to know how much hand-editing they did for sites of interest like that, and how they represent hand-edits in a way that can be re-applied when new lidar datasets come in.


Fantastic. Is there a way to create a link to a given viewpoint location/direction/zoomlevel? That would make it possible to share views of the world, always nice.

When zoomed into an area for which there is 3D building coverage, it feels almost game-like. And I say that from a vantage point of some relevance. :)


I wonder how photography will be affected by this sort of technology in the not so distant future, as the images and point cloud data increase in definition. For instance, instead of waiting for the perfect weather conditions for the desired picture, the "photographer" could simply manipulate lighting and such, then render the scene in high definition.


Wow, I am impressed with how responsive that is. I'm on an Air and I generally don't have good experiences with this sort of thing.


What is the config if I may ask? I'm thinking of getting an Air for dev work (13" 2011 model).


No problem. I believe mine is a late 2010 model. It has the 1.4 GHz Core 2 Duo with 4 GB of RAM. It's 11'' rather than 13''. Performance is generally excellent and it's probably the best computer I've ever owned in terms of outright utility simply because it's approximately the size of a Kindle DX and I can take it anywhere without a thought. The fact that it's usable and responsive essentially instantly after opening the lid is also huge.

The only real problem comes with pushing pixels around, as the other reply mentioned. I can watch QT video fine, but if I go to Vimeo or YouTube, I can't really get a good playback out of Flash. Generally Flash is bad on this machine. I'm a long time Mac user so I'm not too unused to this, but it seems a little worse than on an iMac or something like that.

I think if you use an IDE for dev work you should check it out at an Apple Store or something to make sure the resolution works for you. That's really the only thing I'd consider. My 11'' is really just too small to me, having been spoiled by dual 30''s on my desktop. If you use a text editor or vim/emacs it'll probably be fine, but IntelliJ or Eclipse or whatever just have too many windows to manage in the space, in my opinion.


My choice is between MBP 13" and Air 13", and looking at the online store, Air actually has the better resolution (~1440x900 vs. ~1280x800). I was surprised at this, but I have used Xcode on 1440x900 on a 15" MBP and I was ok with it.

Thanks!


I have the 4GB 11" Air, and it's quickly becoming my favorite dev machine, even when I'm next to a monster dual quad core Xeon box. The Intel card is fairly slow at pushing pixels around (so if you develop graphics you'll notice slow texture access, etc.), but is mostly capable.

The simple deciding factor is an SSD in the laptop. It's hard to overstate how much of a performance difference these make for compiling, localhost-served webpages, etc.


I am almost certain I will get the Air, it's turned into a very capable machine in the latest iteration. Thanks.


Who thinks it's better than Google Maps?


It looks as good as or better than Google Earth, particularly the trees, but the (texture) caching seems to be limited, which could be mitigated by using local storage.
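
(Presumably something like the toy sketch below; tileKey/tileUrl are made-up names, and localStorage's ~5 MB quota would fill up fast with real imagery, so IndexedDB would be the serious option:)

    // Toy sketch: cache tile images in localStorage as data URLs.
    function loadTile(tileKey, tileUrl, onReady) {
      var cached = localStorage.getItem(tileKey);
      if (cached) {
        var img = new Image();
        img.onload = function () { onReady(img); };
        img.src = cached; // data: URL stored on a previous visit
        return;
      }
      var fresh = new Image();
      fresh.crossOrigin = "anonymous"; // needed to read pixels back out
      fresh.onload = function () {
        var canvas = document.createElement("canvas");
        canvas.width = fresh.width;
        canvas.height = fresh.height;
        canvas.getContext("2d").drawImage(fresh, 0, 0);
        try {
          localStorage.setItem(tileKey, canvas.toDataURL("image/jpeg"));
        } catch (e) { /* quota exceeded or tainted canvas: skip caching */ }
        onReady(fresh);
      };
      fresh.src = tileUrl;
    }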


Yeah, it lacks a search box, but the up-close 3D renderings look better to me than what Google currently has available to the public.


As a pure technology demo, it's more impressive than a plain satellite view. But the product isn't comparable.


Er, not me. Where's the search box?

Stuff like this does look genuinely awesome, but Google Maps provides a whole different set of functionality. Searching, routing, etc.


This is basically just a WebGL demo.

For actual Nokia Maps (with search and routing) go to http://maps.nokia.com/


Well, what Nokia has done is better than the way Google has it set up. At least for advanced machines that can run it. But I'm sure Google will catch up when it's a reliable standard.


Google has their own WebGL demo which isn't anything like this http://news.ycombinator.com/item?id=3106658


I've played with their Street View transition in GL... really nice.


Very cool. The satellite imagery seems both more detailed and more recent than Google's.


In my area (Canary Wharf in London), with Google I can see the Crossrail works going on, but there's nothing happening on the Nokia maps, so Google is more recent here.


Came here to say the same; Google Maps imagery of my area is about 7-10 years old.


I was convinced the CIA was watching me, because I've been checking Google Maps for over a year now and the same white van's been parked outside my house for all that time. I just saw that on Nokia maps it's left. Thank goodness!


Can anyone tell how old the maps are that they are using? I looked around but I couldn't find anything.

I tried zooming over my workplace in Darwin, AU, and the building we work in isn't even there... (it's roughly 5 years old)

I imagine there is probably a mash-up of old and new map data in there, depending on the population of a given place.


Wow. Very impressive. I'm in Boston and was able to pull up an almost disturbingly detailed 3D view of my balcony.


First time I saw Google Street View, I was sitting on my balcony with my laptop. I looked at the Google image for my street, and it was me sitting on the balcony with my laptop. I had to do a double-take before I realized the picture was taken a few weeks prior.


Amazing how well the software renders thousands of objects. On close inspection, I find the post-apocalyptic aesthetic of the rendering geometry very appealing. http://i.imgur.com/dNYer.jpg


Here are the APIs for the older version: http://api.maps.nokia.com/. Hopefully they'll write some documentation on using the new WebGL-based API!


Nokia (Ovi) Maps was always the best built-in GPS solution for smartphones.


It's really nice! I hope that Nokia releases this eventually.


Presumably this is a precursor to replacing their 3D plugin version at http://maps.nokia.com/ - at least for browsers that support WebGL.


On Safari: "This requires latest Chrome or Firefox."

On latest Chrome: "There was a WebGL compatibility problem. Please check system settings."

Yay standards?


It really kind of irks me when people complain about standards in a web demo. HTML5 is not standardized. These demos are no more than POCs written to show what the technology is capable of and where the organization sees themselves going forward.

When you see run-the-business type web apps being written in non standard technology, then you can complain. When you see a neat toy being written in non standard technology, take it for what it is.


Well which is it then?

Is it a demo of proprietary technology that happens to be baked into a couple of specific web browsers and video card drivers, or is it a demo of what's possible using a (new, emerging) set of "web" standards?

Because "WebGL" certainly sounds like the name of a standard to me, and very few people expect a "web" demo to care about which brand of video card they have installed.

And since you brought it up, "HTML5 is not standardized" is a little disingenuous. Regardless of its ratification state, companies claim and market HTML5-ness precisely to signal their commitment to open standards as opposed to proprietary technology. Or maybe it's come to mean "anything that isn't Flash." In any case, the implicit promise there is that users will enjoy app functionality with minimal-to-zero worries about client-side configuration or component choice.


Well, maps.nokia.com still goes to their JavaScript implementation, so I'd say the WebGL form isn't the run-the-business site as of yet.

WebGL has a 1.0 specification, but is still not a standard as defined by the W3C/Web Hypertext Application Technology Working Group (WHATWG). Specification is one of the steps towards reaching a standard, so WebGL and HTML5 are well on their way, but not there yet. Standards usually don't care what name they're referred to by (much like 4G-advertised mobile service that doesn't actually reach the 100 Mbps/1 Gbps the standard dictates).

At this point (much like with the aforementioned 4G) HTML5 means about as much as "Web 2.0" does. It's a set of competing implementations with many cross-platform features that are almost guaranteed to make it into the final standard, and a few vendor-specific implementations that are hoping to make it (if they prove their worth). Your assertion of 'Flash-like content that isn't implemented in Flash' (to paraphrase) is quite accurate in current implementations.

To sum it up, the core HTML5 that companies actually market towards is all but standard (offline storage, AJAX-like content control, Canvas, etc). The really cool things that make the front page of Hacker News and Reddit and require you to be running the beta Chrome or nightly Firefox are generally things that the vendor is hoping will make the standard. Marketing is a powerful thing, but not always accurate.


That's all reasonable enough, but does it really excuse this UX?

1. User visits website with late-model hardware/OS.
2. Website says "this site requires browser foo."
3. User installs browser foo and reloads website.
4. Website says "error - check system configuration."

A technology demo with highly-specific client requirements, especially on the web, especially when the demo plays the look-mom-no-plugins card, should try to enumerate the actual requirements. In this case, the requirement that after installing latest Chrome, the fool at the keyboard navigate to chrome:flags and hit the big "turn WebGL on" toggle.

I understand why these types of doc omissions happen, but it's really a pretty serious bug. Every user that hits 1-4 above is a user who is actively dissuaded from caring about the technology that the rest of the site was designed (at non-trivial expense) to promote.


The production version of Nokia Maps 3D requires a proprietary browser plugin to run. This is an immense improvement in the right direction.


16.0.912.63 Chrome on OSX here, works beautifully


Chrome detects whether your graphics card is on a whitelist of cards known to work with WebGL. You can override (force on) this in chrome:flags, I believe.


Woah, what? Why have I not seen chrome:flags before? This is like Christmas.


No go here, same Chrome version...


Same on Windows 7.


Interestingly, it works with the latest Webkit nightly build, so apparently they are doing the right thing and using feature detection.
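
(Feature detection for WebGL is only a few lines - probe for a context and see whether you get one back, instead of sniffing user agents:)

    // Standard WebGL feature detection: probe for a context, don't sniff UAs.
    function hasWebGL() {
      try {
        var canvas = document.createElement("canvas");
        var gl = canvas.getContext("webgl") ||
                 canvas.getContext("experimental-webgl"); // prefix of the era
        return !!gl;
      } catch (e) {
        return false;
      }
    }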


I got the same message on Chrome (15), but it disappeared after a few seconds.


I've been wanting an API key for Nokia's 3D maps for a while. There are a lot of gaming possibilities there.


Isn't the WebGL stuff done by Navteq? That's what the copyright says.


Nokia owns Navteq.


Does anyone else have all the text on the globe upside-down?


Are you in Australia? Just kidding. Since I don't have the latest version of Firefox, despite clicking the check-for-updates box, I cannot check this out. I even tried downloading Chrome, but it failed. Naturally, Google needs JavaScript to let me download a file, and even when it was enabled, it did not download.


Is there a writeup anywhere on how this is done?


Downtown Chicago is lots of fun.


Yeah, that's pretty cool.


I wonder how they get the building facade textures/polys... It seems they have not only vertical satellite shots but also oblique ones.


Not bug-free: http://fredbrach.posterous.com/pas-de-sujet This is the boundary between the 3D data and the flat data.


No scale? Not a map.



