OpenDroneMap – Simple Images into 3D (github.com)
179 points by punnerud 27 days ago | 39 comments



It would be nice to know how this compares with other open source and commercial photogrammetric offerings.

Free:

- AliceVision https://alicevision.github.io/

- Colmap https://colmap.github.io/

- etc.

Commercial:

- Agisoft Metashape (formerly Photoscan) https://www.agisoft.com/

- Zephyr 3D https://www.3dflow.net/3df-zephyr-pro-3d-models-from-photos/

- DroneDeploy https://www.dronedeploy.com/

- etc.

See also:

https://en.wikipedia.org/wiki/Photogrammetry

https://en.wikipedia.org/wiki/Structure_from_motion


We are using ODM to generate orthomosaic photos and it has produced better results than the commercial offerings we have seen (in some cases MUCH better). Note that this did require us to tweak the settings some, but overall it was a pretty easy process.

Things of note:

1) ODM didn't scale quite as well to a very large number of photos. (I think this may have been fixed by now.)

2) Our use case was pretty brutal: ~1 cm resolution photos of mostly uniform row-crop fields. Some of the other tools were really bad about rotating the rows 90 degrees in parts of the field.

3) I was in on the POC and selection process but not the production runs, so I'm not sure how things have changed over the last year.

4) Some of the comparisons were with different photos (looking at a custom flight path).

5) We didn't look at 3D much at all, but it was pretty cool to be able to see 3D tractor tire marks across a freshly planted field. The 3D quality would really fall off at the edges where there was less overlap. I think it was only good in relative height as well, but our test sites were very flat.
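For anyone wanting to reproduce this kind of run: ODM is usually invoked through its Docker image against a folder of photos. A rough sketch is below; the exact image name, mount layout, and flag defaults vary between ODM versions, and the tuning values here are only illustrative, not the settings described above.

    docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
        --project-path /datasets \
        --orthophoto-resolution 2 \
        --min-num-features 12000 \
        --dsm

Here /my/project is expected to contain an images/ folder with the source photos, and the orthophoto and elevation outputs end up in subfolders of the same project directory.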


One of its dependencies, OpenSfM, does a lot of the heavy lifting with regard to generating things like HOG descriptors (since SIFT = $IFT).

That project used to have a bug in its multithreading implementation: if your photo sets got too big, child workers would terminate but fail to get reaped, and your heap size would just continue to grow until things kaboomed at the SfM phase.

Not entirely their fault though, as it stemmed from a deeper issue with the Python stdlib's pool implementation.

Back then, with certain photo sets I would literally run OOM on my homelab, which was equipped with 128 GB of DDR4.

Should be fixed by now though.
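For what it's worth, the generic workaround for that kind of worker memory creep with the stdlib pool (not necessarily what OpenSfM ended up doing) is to recycle workers after a fixed number of tasks, e.g.:

    from multiprocessing import Pool

    def extract_features(photo_path):
        ...  # expensive per-photo work that allocates a lot

    if __name__ == "__main__":
        # maxtasksperchild forces each worker process to be replaced after
        # N tasks, so its heap actually gets released back to the OS.
        with Pool(processes=4, maxtasksperchild=10) as pool:
            pool.map(extract_features, ["img_%04d.jpg" % i for i in range(2000)])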

You wouldn't happen to work for a small MPLS-based startup, would you?


I used both ODM and DroneDeploy and was amazed at the results that DroneDeploy gave me. The ODM results were unfortunately poorer (the generated models were far less detailed), plus it took a loong time to analyze a few hundred pictures on my laptop, while the DroneDeploy model was ready pretty quickly. So while I love good open source software, it feels like ODM still has a long way to go.


At work we tried DroneDeploy and Agisoft, but we settled on Pix4D. It's expensive, but we got better accuracy and the people who operate the drones found it easier to use.


I was thinking the same thing about OpenAerialMap https://openaerialmap.org/


This morning I was watching a "video editing" YouTube video where he made a 3D terrain object to load into DaVinci Resolve (commercial but free for personal use) and tracked it to the palm of his hand so it looked like the terrain was floating above his palm.

He used http://terrain.party/ to get height information as an image (lighter colors are high, darker are low), then a script to scrape together images from Google Maps, which he transformed using the height information.
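If anyone wants to play with the same "lighter = higher" idea outside the editor, turning the greyscale image into an elevation grid is a few lines; a minimal sketch (the filename and elevation range are made-up placeholders for whatever the tile actually covers):

    import numpy as np
    from PIL import Image

    img = Image.open("heightmap.png").convert("L")   # greyscale, 8-bit for simplicity
    h = np.asarray(img, dtype=np.float32) / 255.0    # 0..1, darker = lower

    min_elev_m, max_elev_m = 1800.0, 3200.0          # placeholder range for the tile
    elevation = min_elev_m + h * (max_elev_m - min_elev_m)

    print(elevation.shape, elevation.min(), elevation.max())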

At that point he was using a GUI to view the resulting 3D object and manipulate the camera and lighting. Blew my mind.

I followed it part of the way, but got stuck on getting the Google images. He was doing a valley near the coast, close to an object recognizable from space; I was trying to find the cabin we stayed at last weekend in rural Colorado.

Pretty mindblowing what you can do with free video editing software today.

https://www.youtube.com/watch?v=qP6wVajeI_M


That's a really roundabout way to do that though... He's basically taking free height data, converting it to a raster image, and then approximating it in the editor (which tries to convert the raster heightmap back to a proper map). The actual satellite imagery might be harder to (legally) get (and he basically suggests removing the copyright notices in the video).

It's cool that it works, but it's also the "wrong" way to go about it considering all the tools around to work with open, free GIS data.


Indeed. I was stuck at getting the correct images for the area I had the height map for. My plan was to use the NAIP imagery, which I have sitting here on one of my computers, already processed, and which is free.

But, I ran into some weirdness with Davinci Resolve where it was getting "Media Unavailable" for some frames of the video. Once I went down that rabbit hole, I was just tired of that video and wanted it done. So I didn't add a spinning 3D map of the area, just added some titles and music and called it a day.

Agree that it's not cool to remove copyright.


Google Earth Pro is now free, and it has a high-resolution image export function.


Not that high -- 4800 x 2848 is the max resolution.


Interesting point: while the software is free, the web interface (WebODM) has an integrated paid service to process the images in the cloud (WebODM Lightning [1]). Since resource demands seem quite high (lots of RAM for larger image sets), this looks attractive and seems like a good way to finance an open source project.

1: https://webodm.net/


Are there editors that allow you to manipulate point clouds or ‘digital surface models’? (Not sure what those are) I’d like to use something like this to digitize my property and model changes to it (building, landscaping, etc).





Nice! Thanks!


I assume this uses GPS data to help with stitching the photos together.

Do drones capture orientation data beyond GPS position (for example, pitch, yaw, gimbal direction)?

I have tried adding motion graphics to some of my drone videos in After Effects but as far as I can tell it discards all of the metadata and infers the camera position based solely on the video frames. This is really slow and imprecise; I’ve been wondering if there’s a plug-in that will use the recorded telemetry from the drone to set the camera orientation.


I'm familiar only with DJI drones, and they do capture pitch, yaw, and gimbal orientation, plus a few other variables as well. They store them in the "extended" part of the EXIF.
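If you want to see it for yourself, the angles show up in an XMP packet embedded in the JPEG (on the DJI images I've looked at it's a "drone-dji" namespace; tag names can differ by model and firmware). A quick-and-dirty peek, assuming a hypothetical DJI_0001.JPG:

    import re

    with open("DJI_0001.JPG", "rb") as f:
        data = f.read()

    # Pull out the raw XMP packet, if present
    start, end = data.find(b"<x:xmpmeta"), data.find(b"</x:xmpmeta>")
    xmp = data[start:end].decode("utf-8", "ignore") if start != -1 else ""

    for tag in ("GimbalYawDegree", "GimbalPitchDegree", "GimbalRollDegree",
                "FlightYawDegree", "FlightPitchDegree", "FlightRollDegree",
                "RelativeAltitude"):
        m = re.search(r'drone-dji:%s="([^"]+)"' % tag, xmp)
        print(tag, m.group(1) if m else "not found")

Tools like exiftool will show the same fields more robustly.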


Depends on the drone, but professional photogrammetry drones certainly do. You often have position (latitude, longitude and altitude in WGS-84) and attitude (yaw, pitch, roll), which already accounts for the gimbal orientation (e.g. the drone applies the lever arm to go from the IMU position to the camera optical center and outputs those), along with their covariances (accuracy) as well as timestamping. If the camera has a rolling shutter, you might even have IMU data (i.e. accelerations) to correct distortions. Another important piece of data for the reconstruction is the focal length.

Ultimately, it is the job of the reconstruction software to use that data. Usually it is used to initialise the process, with the covariances acting as constraints, and then the optimiser runs from there. If your stated accuracies are too tight, the reconstruction will often fail, since the software trusts you on the accuracies. However, the better/more accurate the data (e.g. RTK data), the faster/easier it is for the software to do its job.
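As a rough illustration of "use the covariances as constraints": inside the optimiser, the reported position typically ends up as a prior term weighted by its accuracy, something like the sketch below (all numbers made up; a real pipeline does this inside its bundle adjustment, not as a standalone step):

    import numpy as np

    gps_pos = np.array([451234.2, 5411087.9, 312.4])  # reported camera position (metres, e.g. UTM)
    gps_cov = np.diag([0.05, 0.05, 0.10]) ** 2        # reported accuracy (RTK-ish), as variances

    est_pos = np.array([451234.5, 5411088.1, 312.9])  # position currently proposed by the optimiser

    r = est_pos - gps_pos
    prior_cost = r @ np.linalg.inv(gps_cov) @ r       # tight covariances punish deviations hard
    print(prior_cost)

Which is also why overstating your accuracy makes the reconstruction fail: the optimiser is forced to stay near positions that don't agree with the images.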


All ODM needs is a centroid lat/lon from the EXIF tags on each photo. You can optionally add a text file that ties specific pixels to exact lat/lon coordinates. From this it matches features across the photos to calculate the exact pitch, yaw, and direction. I think you can specify those in the EXIF, but it is really hard (i.e. expensive) to capture them at high enough quality to render the final image.

In our case the high-quality gimbals cost more than our drones (~$15k each).


OpenDroneMap uses a text file where you specify an image name and pixels along with the corresponding GPS coordinates for those pixels. Pitch, yaw, and gimbal direction can probably help optimize those algorithms. I am thinking of using OpenCV for image recognition of a large "chess board"; from there it should be easy to get both the GPS coordinates and the pixels in the images that see this calibration board.
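For reference, ODM's ground control file (gcp_list.txt) is roughly a coordinate-system header followed by one line per marked pixel; the rows below are made-up values, and the exact column order is worth checking against the current docs:

    +proj=utm +zone=33 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
    544256.7 5320919.9 5.1 3044 2622 DJI_0177.JPG
    544157.7 5320899.2 5.4 4193 1552 DJI_0177.JPG
    544256.7 5320919.9 5.1 1642 2110 DJI_0178.JPG

i.e. geo_x geo_y geo_z, then the pixel coordinates in that image, then the image name. The same ground point can (and should) appear in several images.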


I don't think it uses GPS for that. You can match two partially overlapping images; from the movement of the feature points in the overlapping areas you can calculate your camera orientation, and thus the 3D point cloud of feature points (or something close to this). I think it is called structure from motion; the readme links a paper, maybe it's worth reading for you.
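The two-image version of that idea is easy to sketch with OpenCV (the filenames and intrinsics here are assumptions; real pipelines do this over many images plus bundle adjustment):

    import cv2
    import numpy as np

    img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and match feature points in the overlapping area
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Assumed camera intrinsics (focal length in pixels, principal point at centre)
    K = np.array([[3000.0, 0, img1.shape[1] / 2],
                  [0, 3000.0, img1.shape[0] / 2],
                  [0, 0, 1]])

    # Relative camera pose from the motion of the matched features
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the matches into a sparse 3D point cloud
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points3d = (pts4d[:3] / pts4d[3]).T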


Interesting, is there a technical reason for not using that extra data to help with the process?

For example, another commenter mentioned

> Some materials don't really contain any surface details that the algorithms could use to attach feature points to, so they will be blank. Large white walls and large windows are especially difficult.

Seems like you might be able to position some of these with orientation data from the camera.


The GPS data is probably being used as a prior. The GPS metadata is accurate but imprecise, while SfM is more precise. GPS is likely used for pose initialization, while SfM is then used to refine the pose.


Would a high-quality pre-registration via precise coordinates and orientation yield better (and faster) results though? If the algorithm did not have to guess the camera parameters I would imagine a benefit.


In practice, there are a couple of ways to handle this. In one case, EXIF data gives you a lot of information (sensor size, camera/drone make, etc.) that you can extract very good camera intrinsics from. This information can also be fed into big EXIF databases, from which you can obtain pretty accurate camera parameters.

Additionally, camera intrinsic calibration is a relatively solved problem, especially if you know that every single camera has the same intrinsics.
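For example, getting an initial focal length in pixels (the main intrinsic) out of EXIF is just a ratio; the numbers below are placeholders roughly matching a 1-inch-sensor drone camera:

    focal_length_mm = 8.8      # EXIF FocalLength
    sensor_width_mm = 13.2     # looked up from the camera model (1-inch sensor)
    image_width_px = 5472      # EXIF image width

    focal_length_px = focal_length_mm * image_width_px / sensor_width_mm
    print(focal_length_px)     # ~3648 px, a good starting guess for the optimiser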


In the lab I work in we have internal tools for doing something like this for precision agriculture, but I have never heard of an open source solution. UAVs generally don't record their orientation or GPS data without setting up a separate log file (I have no idea how you do this on Pixhawks. We use BYU's autopilot or our own autopilot).


DJI drones store that in the image itself. No need to use the logs.


What kind of image quality would one need to get something useful out of this? Useful: create a 3D model of a house with surrounding trees, so I can see the effect of those trees on a planned solar installation. Or create a 3D model of a garden to see where the sun and shadow fall over the course of the year.


The better the photograph, the better the output quality. Photogrammetric output will always have noise. Some materials don't really contain any surface details that the algorithms could use to attach feature points to, so they will be blank. Large white walls and large windows are especially difficult.

Don't crop the photos. Prefer slight underexposure. For each detail you want to capture, have it visible in at least three photos. Capture photos from different positions (i.e. rather move yourself than just turn the camera) with at least 30%-50% overlap between photos.

You can find lots of tutorials on the internet. A search term that hits gold is 'photogrammetry tutorial'. For example:

https://medium.com/realities-io/getting-started-with-photogr...

If you just want to try out the technique, I suggest you download the free trial version of Zephyr 3D. You can just give it a bunch of photos and it does a fairly sane job with the default options.

https://www.3dflow.net/3df-zephyr-pro-3d-models-from-photos/

If you like it, you can start experimenting with free and commercial tools including the one in the topic of this thread (see my other post in this thread for some other photogrammetric tools).

If you have a large set of photos you will start to need a fairly beefy computer with tens of gigs of RAM at the least. But you can get pretty nice results for, e.g., just a single house with a regular modern desktop PC.


I love the idea of your proposed application. However, I don’t see it as a problem needing solving. It’s fairly easy to survey a solar installation and potential shadows using basic trigonometry and sun charts. I would imagine many solar installers can do this in under an hour.

I was running some experiments with Drone Deploy and my DJI Mavic to document road damage and flooding in my neighborhood because the city has neglected to maintain our storm water infrastructure. Perhaps I wasn’t using high enough overlap but I found I could not get quality orthophotos at the desired resolution because of too many trees. Using a higher altitude (lower resolution) would yield quality orthophotos but then the details I wanted to see like potholes and standing water were not clear enough. My point is I think these tools can work really well when the conditions are right, as evidenced by all of the high quality samples and demos, but there are still a lot of situations where you’re just not going to get very high quality due to conditions of the subject area.
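For anyone weighing the same trade-off, the altitude vs. resolution relationship is easy to estimate up front. A sketch with placeholder numbers roughly in the range of a small drone camera (not the Mavic's exact specs):

    sensor_width_mm = 6.17
    focal_length_mm = 4.7
    image_width_px = 4000

    def gsd_cm_per_px(altitude_m):
        footprint_m = altitude_m * sensor_width_mm / focal_length_mm  # ground width covered by one photo
        return footprint_m / image_width_px * 100                     # ground sample distance

    for alt in (30, 60, 120):
        print(alt, "m ->", round(gsd_cm_per_px(alt), 2), "cm/px")

So roughly: each doubling of altitude halves the detail, which is why potholes that are obvious at 30 m disappear at 120 m.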


I would say the quality is relative to your input and to what type of application you need results for.

If you are going to use the point cloud as a cheaper and faster alternative to LIDAR and need to take measurements with centimeter precision, you will need a lot of overlap (we use 75%), and the drone should take the photos automatically to be sure it covers most of the desired area. I would also recommend a commercial tool for the task. We have jobs that take one week to process with state-of-the-art hardware.

But if you just want to do it for fun, you won't need to get this complicated and can do it even with your phone.


Used this pretty extensively, and followed their dev branches. FWIW, if you want quality results the best option is Agisoft on a GPU machine, with some custom Python to automate it.


FWIW, ODM was slightly better quality than Photoscan in our tests. We only tested orthomosaics, but ODM had less error against our ground control points (note: we didn't feed the control points in as givens for these tests) and fewer aberrations.


Does anyone have any experience with the ZED camera? They claim they can generate 3d maps down to 1cm resolution without LIDAR using these kinds of methods.


The ZED camera works by using two high-resolution RGB cameras, finding matching points between the two, and using that to calculate depth. Because they are using high-resolution RGB cameras, they can generate very high resolution depth maps. However, this is also their main drawback; with just RGB images, depth map calculation will fail on texture-poor surfaces, such as plain white walls, because it's very difficult to match points.

They are fundamentally performing photogrammetry with two cameras at a fixed distance.
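A minimal sketch of that fixed-baseline idea with OpenCV (the focal length and baseline are placeholders, not the ZED's actual calibration, and a rectified image pair is assumed):

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching finds, for each pixel, how far it shifted between the two cameras
    stereo = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    focal_px = 700.0    # focal length in pixels (placeholder)
    baseline_m = 0.12   # distance between the two cameras (placeholder)

    with np.errstate(divide="ignore", invalid="ignore"):
        depth_m = focal_px * baseline_m / disparity  # texture-poor areas give no/garbage disparity

The plain-white-wall failure shows up directly here: no texture means no reliable match, so no disparity and no depth.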


You can solve this edge case using a light source, with a method called “structured light”: https://en.m.wikipedia.org/wiki/Structured-light_3D_scanner


This might be off-topic, but does anyone know of good software for indoor use? I.e., turning indoor images into indoor 3D models.



