- AliceVision https://alicevision.github.io/
- Colmap https://colmap.github.io/
- Agisoft Metashape (formerly Photoscan) https://www.agisoft.com/
- Zephyr 3D https://www.3dflow.net/3df-zephyr-pro-3d-models-from-photos/
- DroneDeploy https://www.dronedeploy.com/
- etc etc etc
Things of Note:
1) ODM didn't quite scale as well to very large numbers of photos. (I think this may have been fixed by now.)
2) Our use case was pretty brutal: ~1 cm resolution photos of mostly uniform row-crop fields. Some of the other tools would incorrectly rotate the rows 90 degrees in parts of the field.
3) I was in on the POC and selection process but not the production runs so I'm not sure how things have changed over the last year.
4) Some of the comparisons were with different photos (Looking at a custom full path)
5) We didn't look at 3D much at all, but it was pretty cool to be able to see 3D tractor tire marks across a freshly planted field. The 3D quality would really fall off at the edges where there was less overlap. I think it was only good for relative height as well, but our tests were very flat.
That project used to have a bug in its multithreading implementation where, if your photo sets got too big, child threads would terminate but fail to get reaped, and your heap size would just continue to grow until things kaboomed at the SfM phase.
Not entirely their fault, though, as it stemmed from a deeper issue with the Python stdlib's pool implementation.
Back then, with certain photo sets I would literally run OOM on my homelab, which was equipped with 128 GB of DDR4.
Should be fixed by now though.
You wouldn't happen to work for a small MPLS based startup would you?
He used http://terrain.party/ to get height information as an image (lighter colors are high, darker are low), then a script scraped together images from Google Maps, which he transformed using the height information.
At that point he was using a GUI to view the resulting 3D object and manipulate the camera and lighting. Blew my mind.
I followed it part of the way, but got stuck on getting the Google images. He was doing a valley near an object recognizable from space, close to the coast; I was trying to find the cabin we stayed at last weekend in rural Colorado.
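The heightmap half of that pipeline is easy to reproduce: treat pixel brightness as elevation and lay out a vertex grid. A rough sketch (the elevation scale and cell size are made-up numbers, not anything terrain.party specifies):

```python
import numpy as np

# Hypothetical 8-bit grayscale heightmap: lighter = higher, darker = lower.
# A real one would be loaded from the exported image file.
heightmap = np.tile(np.arange(64, dtype=np.uint8) * 4, (64, 1))

max_elevation_m = 500.0  # assumed vertical scale of the export
cell_size_m = 30.0       # assumed ground distance between pixels

# Convert pixel brightness to elevation and lay out one vertex per pixel.
rows, cols = heightmap.shape
z = heightmap.astype(np.float64) / 255.0 * max_elevation_m
xs, ys = np.meshgrid(np.arange(cols) * cell_size_m,
                     np.arange(rows) * cell_size_m)
vertices = np.stack([xs, ys, z], axis=-1).reshape(-1, 3)
print(vertices.shape)  # one 3D vertex per heightmap pixel
```

From there, connecting neighbouring vertices into triangles gives the terrain mesh that the satellite imagery gets draped over.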
Pretty mindblowing what you can do with free video editing software today.
It's cool that it works, but it's also the "wrong" way to go about it considering all the tools around to work with open, free GIS data.
But I ran into some weirdness with DaVinci Resolve where it was getting "Media Unavailable" for some frames of the video. Once I went down that rabbit hole, I was just tired of that video and wanted it done, so I didn't add a spinning 3D map of the area; I just added some titles and music and called it a day.
Agree that it's not cool to remove copyright.
PolyFit - https://3d.bk.tudelft.nl/liangliang/publications/2017/polyfi...
Do drones capture orientation data beyond GPS position (for example, pitch, yaw, gimbal direction)?
I have tried adding motion graphics to some of my drone videos in After Effects but as far as I can tell it discards all of the metadata and infers the camera position based solely on the video frames. This is really slow and imprecise; I’ve been wondering if there’s a plug-in that will use the recorded telemetry from the drone to set the camera orientation.
Ultimately, it is the job of the reconstruction software to use those data.
Usually they use them to initialise the process, use the covariances as constraints, and then run their optimiser from there. If your stated accuracies are too tight, the reconstruction will often fail, as the software trusts you on the accuracies.
However, the better/more accurate the data (e.g. RTK data), the faster/easier it is for the software to do its job.
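To make the "covariances as constraints" idea concrete, here is a toy sketch of how a GPS/RTK position prior typically enters the optimiser: as a residual divided by the reported accuracy, so tighter accuracies pull the solution harder toward the telemetry (all numbers below are made up):

```python
import numpy as np

# Hypothetical camera-position prior from drone telemetry, in metres.
gps_prior = np.array([10.0, 20.0, 100.0])
# Reported accuracies (1-sigma); RTK-grade values would be this tight.
gps_sigma = np.array([0.03, 0.03, 0.05])

def prior_residual(camera_position):
    # Small sigma => large weight: the optimiser "trusts" the telemetry.
    # Overstating the accuracy (sigma too small) over-constrains the
    # solve, which is why too-tight accuracies make reconstruction fail.
    return (camera_position - gps_prior) / gps_sigma

estimate = np.array([10.02, 19.98, 100.04])
print(prior_residual(estimate))
```

A bundle adjuster stacks residuals like this alongside the image reprojection residuals and minimises them jointly.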
In our case the high-quality gimbals cost more than our drones (~$15k each).
For example, another commenter mentioned
> Some materials don't really contain any surface details that the algorithms could use to attach feature points to, so they will be blank. Large white walls and large windows are especially difficult.
Seems like you might be able to position some of these with orientation data from the camera.
Additionally, camera intrinsic calibration is a relatively solved problem, especially if you know that every single camera has the same intrinsics.
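As a quick illustration of what knowing the intrinsics buys you: with a calibrated pinhole matrix K, mapping a 3D point in the camera frame to pixel coordinates is a single matrix multiply. The focal lengths and principal point below are arbitrary example values, not any particular camera's calibration:

```python
import numpy as np

# Assumed pinhole intrinsics: focal lengths and principal point in pixels.
# Calibration estimates these once, and they hold for every identical camera.
fx, fy, cx, cy = 1200.0, 1200.0, 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]  # perspective divide by depth

print(project(np.array([0.1, -0.05, 2.0])))
```

With intrinsics fixed in advance, the reconstruction only has to solve for the camera poses and scene points, which shrinks the search space considerably.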
Don't crop the photos. Prefer slight underexposure. For each detail you want to capture, have it visible in at least three photos. Capture photos from different positions (i.e. move yourself rather than just turning the camera), with at least 30%-50% overlap between photos.
You can find lots of tutorials on the internet; a search term that hits gold is 'photogrammetry tutorial'.
If you just want to try out the technique, I suggest you download the free test version of Zephyr 3D. You can just give it a bunch of photos and it does a fairly sane job with the default options.
If you like it, you can start experimenting with free and commercial tools including the one in the topic of this thread (see my other post in this thread for some other photogrammetric tools).
If you have a large set of photos you will start to need a fairly beefy computer with tens of gigs of RAM at the least. But you can get pretty nice results for, e.g., just a single house with a regular modern desktop PC.
I was running some experiments with Drone Deploy and my DJI Mavic to document road damage and flooding in my neighborhood because the city has neglected to maintain our storm water infrastructure. Perhaps I wasn’t using high enough overlap but I found I could not get quality orthophotos at the desired resolution because of too many trees. Using a higher altitude (lower resolution) would yield quality orthophotos but then the details I wanted to see like potholes and standing water were not clear enough. My point is I think these tools can work really well when the conditions are right, as evidenced by all of the high quality samples and demos, but there are still a lot of situations where you’re just not going to get very high quality due to conditions of the subject area.
If you are going to use the point cloud as a cheaper and faster alternative to LIDAR and need to take measurements with centimeter precision, you will need a lot of overlap (we use 75%), and the drone should take the photos automatically to be sure it covers most of the desired area. I would also recommend a commercial tool for the task. We have jobs that take one week to process on state-of-the-art hardware.
But if you just want to do it for fun, you won't need to get this complicated and can do it even with your phone.
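For a rough sense of what 75% forward overlap implies for flight planning: the along-track spacing between shots is the photo's ground footprint times (1 − overlap). A back-of-the-envelope sketch with made-up camera numbers (not from any specific drone):

```python
# All camera parameters below are assumed example values.
altitude_m = 60.0
focal_length_mm = 8.8      # hypothetical lens
sensor_height_mm = 8.8     # hypothetical sensor extent along the flight line

# Ground footprint of one photo along the flight direction (pinhole model).
footprint_m = altitude_m * sensor_height_mm / focal_length_mm

overlap = 0.75  # the 75% forward overlap mentioned above
spacing_m = footprint_m * (1.0 - overlap)

print(footprint_m, spacing_m)
```

Higher overlap means proportionally more photos per hectare, which is a large part of why those jobs take a week to process.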
They are fundamentally performing photogrammetry with two cameras at a fixed distance.