Fix photogrammetry bridges so that they are not "solid" underneath (2020) (flightsimulator.com)
67 points by Bluestein 13 days ago | 19 comments





I suspect that Bing's photogrammetry pipeline/presentation involves converting everything into a heightmap, because overhangs in Bing Maps are vanishingly rare and I've only ever seen them on prominent landmarks (Brooklyn Bridge, McCormick Place, etc.) which might be hand-placed exceptions. Even the trees have "skirts" which drop straight down to the ground.

This is a major contrast to Google Maps, which uses a mesh capable of representing overhangs, including trees and most bridges. (Bridges which are fairly wide and close to the water still get turned into causeways.) The fidelity is pretty remarkable - it'll even make an attempt at rendering power poles and traffic lights.
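
For intuition, here is a toy sketch (made-up points, nothing from the actual Bing pipeline) of why a pure heightmap representation turns a bridge into a solid wall: each (x, y) cell keeps a single elevation, so the deck wins and the water underneath disappears.

    import numpy as np

    # Toy "2.5D" conversion: keep one elevation per (x, y) cell. Anything under
    # the bridge deck is lost, so the span reads as solid down to the ground.
    # Points are imaginary (x, y, z) samples, not real map data.
    points = np.array([
        [5.0, 2.0,  0.0],   # river surface under the span
        [5.0, 2.0, 12.0],   # bridge deck directly above it
        [1.0, 2.0,  0.5],   # river bank
    ])

    cell = 1.0  # grid resolution in metres
    ij = np.floor(points[:, :2] / cell).astype(int)
    heightmap = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        # max() collapses the column: the 12 m deck wins, the 0 m water is gone
        heightmap[(i, j)] = max(heightmap.get((i, j), -np.inf), z)

    print(heightmap)  # {(5, 2): 12.0, (1, 2): 0.5} -- no overhang survives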

There is a mod for FS2020 which attempts to use Google Maps data in place of Bing, although I'm not sure if it's just imagery or the photogrammetry as well. I haven't tried it myself so I can't speak to how effective it is (and I suspect both MS and Google might have something to say about it in their ToS).


It's just imagery afaik.

As for the overhangs, good insight! I have been following a Czech company that makes true 3D maps from aerial data; here's one of their interesting showcases, on extracting individual trees from scans: https://www.melowntech.com/vadstena-ai-based-automatic-tree-...


This is what happens to photogrammetry reconstructions around water and other areas where there aren't enough camera angles seeing underneath the bridge to determine the 3-D geometry. This is usually rectified in post-processing by Google or anyone else serious about representing these bridges nicely. Often that means a 3-D modeler's time, or, if you are really enterprising, you can fly underneath the same bridge with a smaller drone with the camera pointing up, so that you get really accurate photos of the underside to feed into SfM/COLMAP. I wish this were solvable by GenAI, but garbage in, garbage out really applies here: you don't know what the structure of that bridge looks like underneath the roadway unless you've taken multiple photos of it, and no two bridges are identical. I'm sure someone could train an AI on every bridge imaginable and we could get something better?
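
For what it's worth, that under-the-deck photo set could then go through an ordinary SfM run. A rough sketch using COLMAP's Python bindings (paths are placeholders, and the high-level helpers below follow pycolmap's documented example, so treat the exact names as an assumption to check against your installed version):

    import pathlib
    import pycolmap  # COLMAP's Python bindings

    # Placeholder paths; helper names follow pycolmap's documented example
    # (extract_features / match_exhaustive / incremental_mapping).
    image_dir = pathlib.Path("under_bridge_photos")
    out_dir = pathlib.Path("sfm_out")
    out_dir.mkdir(exist_ok=True)
    database = out_dir / "database.db"

    pycolmap.extract_features(database, image_dir)   # detect features in each photo
    pycolmap.match_exhaustive(database)              # match every image pair
    maps = pycolmap.incremental_mapping(database, image_dir, out_dir)
    maps[0].write(out_dir)                           # sparse model of the underside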

I don’t think AI is necessary here, nor more data. You can post-process these bridges by voxelizing them and carving out the bottom. There are many heuristics to use. And with sanity checks to discard bridges the post-processor can’t handle, it could successfully clean up 80% of them.

I have built systems that turned organic meshes into voxel and sparse voxel octree representations, modified them, and produced new meshes with various parameters. It is doable; sometimes you just need to dig into the academic papers for a month or so.
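
As a rough illustration of the carving step (not anyone's actual pipeline): assume the mesh has already been voxelized into an occupancy grid, and that the water level and deck elevation are known from OSM tags or a heuristic; then clear everything between the water and the underside of the deck.

    import numpy as np

    # Assumed inputs: a boolean occupancy grid from voxelizing the "solid"
    # bridge, plus water level and deck elevation from OSM tags or a heuristic.
    res = 0.5                                    # metres per voxel
    occ = np.ones((40, 12, 30), dtype=bool)      # x, y, z occupancy of a solid causeway

    water_z = 0.0                                # water surface elevation
    deck_top_z = 12.0                            # top of the road deck
    deck_thickness = 2.0                         # keep ~2 m of deck, hollow the rest

    z = np.arange(occ.shape[2]) * res
    carve = (z > water_z) & (z < deck_top_z - deck_thickness)
    occ[:, :, carve] = False                     # clear voxels between water and deck

    print(occ.sum(), "occupied voxels remain")   # the span is now hollow underneath
    # A real pass would also restrict the carve to the bridge footprint and
    # sanity-check the result before replacing the original mesh.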

Probably the team just has higher-priority work. Building this post-processor for bridges seems doable by one engineer over a quarter, but better-looking bridges won't likely sell more copies of the flight simulator. So it's probably a very low-priority fix.


I don’t know about that. Flying under bridges has got to be one of the most popular simple joys available in a flight sim.

Which is why KSP was updated to include a skyway near the launch site. But if you are flying under bridges, you are probably too busy looking elsewhere to notice the less-than-photorealistic textures under the road deck.

What’s the difference between AI and heuristics?

One requires a data centre's worth of GPUs and more energy than an island nation to train and run… and the other approach works?

I forgot to mention that you could also perform terrestrial laser scanning or SLAM underneath the bridge and fuse that with the aerial photogrammetry to get a unified model, but this is even more effort and more post-processing.
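
For the fusion step, a minimal sketch with Open3D's ICP registration, assuming both clouds are already roughly georeferenced and the file names are placeholders:

    import numpy as np
    import open3d as o3d

    # Placeholder file names; assumes both clouds are roughly in the same
    # georeferenced frame so ICP only has to refine the alignment.
    aerial = o3d.io.read_point_cloud("aerial_photogrammetry.ply")
    ground = o3d.io.read_point_cloud("under_bridge_slam.ply")

    result = o3d.pipelines.registration.registration_icp(
        ground, aerial,
        max_correspondence_distance=1.0,  # metres; depends on the initial alignment quality
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    ground.transform(result.transformation)  # snap the ground scan onto the aerial cloud
    fused = aerial + ground                  # PointCloud supports + for concatenation
    o3d.io.write_point_cloud("fused_bridge.ply", fused)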

I have to wonder at what level you want a truly accurate representation of all bridges so that anyone could see them. Bad actors can ruin a lot of cool stuff.


I would have expected that flying a drone under a major bridge would be a good way to get the attention of serious men with guns.

Too bad boats cost so much. I see those things pass under bridges all day long without breaking a sweat

> I wish this was solvable by GenAI, but the whole thing of garbage in garbage out really applies here. You don’t know what the structure is of that bridge looks like underneath

Putting a few of the bridges from the thread into Google Images, I quickly find photos of the underside.

From the thread - Walt Whitman Bridge - https://www.google.com/search?q=walt+whitman+bridge+undernea...

GenAI isn't a magic wand that cures cancer, but a lack of data isn't the problem here.

GenAI is garbage, although ironically this is something it might be OK at. Compared to a solid blob, it might be able to fake a plausible underside until we move on to better AI than LLMs.


On OpenStreetMap it's possible to "query features" and see that a piece of road is marked as a bridge. The first solution I can think of would be to use that data (and give OSM a nice donation) and make these roads only ~2 meters thick and hollow underneath.
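
That data is easy to pull programmatically. A small sketch against the public Overpass API (the bounding box is an arbitrary example around Philadelphia):

    import requests

    # Pull every way tagged bridge=yes inside a bounding box (south, west,
    # north, east); the box here is an arbitrary example around Philadelphia.
    query = """
    [out:json][timeout:60];
    way["bridge"="yes"](39.88,-75.22,39.96,-75.12);
    out geom;
    """
    resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
    resp.raise_for_status()

    for way in resp.json()["elements"]:
        name = way.get("tags", {}).get("name", "unnamed")
        print(name, len(way.get("geometry", [])), "nodes")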

As someone who has played a lot with OpenDroneMap, I think this is influenced by how the photo set was captured: if the photos all look straight down (nadir), there isn't much information to be derived about the underside of the bridge, and the software will make a best guess with the data it was given.

ODM recommends flying two patterns, separated by 20°, with the camera tilted 5° forward to avoid some of the effects of pure nadir photography, but this doubles the survey time.

https://docs.opendronemap.org/tutorials/
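
A toy sketch of that double-grid idea, just to show why it roughly doubles the waypoint count (all numbers are illustrative, not ODM defaults):

    import numpy as np

    # Two lawnmower grids over the same area, the second rotated by 20 degrees,
    # as in the recommendation above. All dimensions are illustrative.
    def lawnmower(width, height, spacing):
        """Back-and-forth survey lines over a width x height area (local metres)."""
        pts = []
        for i, y in enumerate(np.arange(0.0, height + spacing, spacing)):
            xs = [0.0, width] if i % 2 == 0 else [width, 0.0]
            pts += [(x, y) for x in xs]
        return np.array(pts)

    def rotate_about_center(pts, degrees, center):
        a = np.radians(degrees)
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        return (pts - center) @ R.T + center

    area_center = np.array([200.0, 100.0])
    grid_a = lawnmower(400.0, 200.0, 40.0)                   # first pass
    grid_b = rotate_about_center(grid_a, 20.0, area_center)  # second pass, 20 deg off
    print(len(grid_a) + len(grid_b), "waypoints -- roughly double the flight time")
    # The ~5 deg forward camera pitch would be set on the gimbal, not in the waypoints.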


This has less to do with photogrammetry and more to do with height maps, I'd think.

Photogrammetry is perfectly capable of modeling holes. Height maps are not, and this is exactly what I'd expect from a height map based approach.


I wonder if there is any research into using reflections in water as an image source for photogrammetry. Presumably they would show the underside of the bridges.
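
The usual trick with planar reflections is that a reflection in still water behaves like a virtual camera mirrored across the water plane, so in principle reflected pixels do constrain the underside. A minimal pose-mirroring sketch (made-up numbers):

    import numpy as np

    # Mirror a camera pose across the water plane z = 0: the reflected image is
    # what this "virtual" underwater camera would have seen. Pose values are made up.
    M = np.diag([1.0, 1.0, -1.0, 1.0])        # reflection across z = 0, homogeneous coords

    cam_to_world = np.eye(4)                  # camera-to-world pose of the real camera
    cam_to_world[:3, 3] = [10.0, 0.0, 50.0]   # 50 m above the water

    virtual_cam = M @ cam_to_world            # virtual camera sits 50 m below the plane
    print(virtual_cam[:3, 3])                 # [ 10.   0. -50.]
    # Note the mirrored pose flips handedness (det of the rotation becomes -1),
    # which a real pipeline would need to account for when feeding SfM.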

How do they look in Bing Maps 3D View? Or are they just not represented in 3D at all?

Probably identically; Flight Sim's world data primarily comes from Bing Maps.

That reminds me of the 300-story, 10x6 m building, also from OpenStreetMap data.

https://www.theverge.com/2020/8/20/21376978/microsoft-flight...



