It seems this might be the theoretical error that you can get when the system can identify a feature perfectly, and when it gets a usable reflection from that point at enough angles.
But this is not the case for most points on any real object (i.e. not a chalk-coated gauge block), so you're definitely not going to get a model where the maximum error is 10 microns.
The results are certainly impressive, but hackster.io is taking liberties with that headline. It's not a realistic accuracy figure, and the author of the project doesn't really seem to be making that claim.
Edit: Here is the source for the claim: https://www.reddit.com/r/OpenScan/comments/gfottc/10_micron_...
I don't think I follow your reasoning about 'real objects'. Real objects can (and usually must) be coated with scanning sprays, like the Aesub Blue the author notes in several places that he uses. These sprays then vanish completely from the model after a few hours. The entire point of these sprays is to ensure that your model is covered in features that are perfectly identifiable.
The author makes the 10 micron claim in several other places, including https://www.reddit.com/r/photogrammetry/comments/k5dbtk/auto... about a totally different model. That thread discusses ways to quantify accuracy and generally agrees that the rig should not be capable of such feats.
But then there are images like https://www.reddit.com/r/OpenScan/comments/ls17tt/closeup_of... in which his scans are picking up the silk screening on the surface of the board, which at least lends some credence to the idea that he's getting 10 micron accuracy.
This might be a good place to ask, because I've been wondering about this for a while: do any of these scanning sprays react with copper, silver, or gold? I can't ever find an actual ingredients list for them, and all the MSDS ever lists is various hydrocarbons, which I know won't react.
Cyclododecane has been the industry standard for vanishing 3D scanning sprays, but since cyclododecane is no longer available from the textile industry, this kind of spray no longer exists.
Aesub Blue is a rather new product from a young German company. I met the founders last year and they are really professional and transparent. Just check out their documentation.
To be fair, as the product is so new, possible interactions with materials are still not perfectly understood. But from what I have seen, the feedback from the professional scanning community is very positive.
Hi, this is Thomas, the guy behind OpenScan. I really would like to clarify several things.
- I have never made such a bold claim of 10 micron accuracy for the device. Yes, I have made several scans where the accuracy is quite good, but I would never say that this can be reached under all circumstances.
- To be fair, I have been kinda click-baity in the past on a handful of occasions, but anyone who does at least 5 minutes of research will find, for instance, this page: https://en.openscan.eu/quality
- Comparing this 100-200€ device with an industrial 3d scanner is like comparing a hobbyist 3d printer with a high-end injection molding machine...
Anyway, feel free to discuss and ask me anything about it :)
Really nice work, by the way... I hope my comment didn't seem overly harsh. I know that doing testing with gauge blocks is part of the process and I really only took issue with the hackster.io headline.
Thank you and no worries, I absolutely agree with your valid criticism.
Later this year I will get some more CT reference scans to compare my results against. I think that will give some better insights.
By the way, even on the professional market there seems to be no "universal" benchmarking procedure for comparing the various mid-range scanners (1-20k€). So this is a very interesting discussion point.
So the whole project that I can find documentation for seems to be about soldering together a couple of stepper drivers (rather than just using a scrap 3d printer controller, which everyone has sitting around) and a ring light. Where's the photogrammetry workflow part?
The author says he wants a one-click solution for getting a 3D model of a scanned object. I think that's a great goal - as easy as hitting the green button on a photocopier, except now three-dimensional. Between the automated picture-taking and the cloud processing workflow, this project is already remarkably close.
I think this project is about taking a bunch of photos of the object and sharing them over Samba. Then you can load them into Meshroom or another piece of software. It's not all in one package.
Yes, this is the current public stage. The device takes a bunch of images and creates a zip file that needs to be downloaded and processed manually. The main advantage already is that the images and the lighting are very consistent, which is a major key to successful photogrammetry. The upcoming cloud processing would simplify this further, but as I am doing all of this alone (+ help from the community), progress is rather slow...
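For anyone who wants to script that manual step in the meantime, here is a rough sketch of wiring the downloaded zip into a local Meshroom install. This is not the OpenScan cloud workflow; the zip and folder names are placeholders, and the exact CLI name and flags ("meshroom_batch", --input/--output) depend on your Meshroom version, so treat them as assumptions.

    # Sketch: unzip the image set from the scanner and hand it to Meshroom.
    # Assumes Meshroom's batch CLI is on the PATH; check your version for
    # the exact executable name and flags.
    import subprocess
    import zipfile
    from pathlib import Path

    scan_zip = Path("openscan_images.zip")   # zip downloaded from the device (placeholder name)
    image_dir = Path("scan_images")
    model_dir = Path("scan_model")

    image_dir.mkdir(exist_ok=True)
    with zipfile.ZipFile(scan_zip) as zf:
        zf.extractall(image_dir)             # consistent lighting/poses make this step reliable

    subprocess.run(
        ["meshroom_batch", "--input", str(image_dir), "--output", str(model_dir)],
        check=True,
    )
    print("photogrammetry output written to", model_dir)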
What I don't get is why this isn't easy to do with a smartphone. A little motorized turntable with a stand for your phone and a calibration object covered in tracking symbols to use for adjusting the input from the camera. Why is that so impossible?
It's easy to do with a smartphone, and you usually don't even need a calibration object. Software like Metashape or Meshroom can infer the lens transform and the position of the camera from enough photos of the same object taken from different angles. The small smartphone optics actually help you by giving you a deeper DoF.
Not the best example, but this http://koanlabs.org/vignettes.html rock is ~5 cm tall and my best guess is that the minimum detectable feature size is about 10 microns.
EDIT: Sorry, to be clear, the definition of "easy" here is "as easy as most photogrammetry": to get good results you will universally do a lot of knob twiddling, waiting, and cleaning up broken bits of mesh, unless all you do is scan rocks. Scanning rocks is easy.
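To make the "software infers the camera pose" point a bit more concrete, here is a minimal OpenCV sketch (not Metashape's or Meshroom's actual code) that recovers the relative rotation and translation between two overlapping photos from matched pixel coordinates. The function name and the rough focal-length guess are my own assumptions; real pipelines refine the intrinsics later during bundle adjustment.

    # Minimal sketch: relative pose between two views from matched points.
    # pts1/pts2 are Nx2 arrays of matched pixel coordinates from a feature
    # matcher; the intrinsics below are only a rough guess.
    import numpy as np
    import cv2

    def relative_pose(pts1, pts2, width, height, focal_guess=None):
        f = focal_guess or 0.8 * width              # crude focal length in pixels
        K = np.array([[f, 0, width / 2],
                      [0, f, height / 2],
                      [0, 0, 1]], dtype=np.float64)

        # Essential matrix from correspondences, with RANSAC to reject outliers
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        # Decompose into the relative rotation R and unit-scale translation t
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t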
I don't have expertise in this area, but my experience with other engineering disciplines is that nothing is ever as easy in practice as it is in words, because there are a lot of things that can go wrong or turn out not to be compatible with one another.
"The devil is in the details" and "talk is cheap" are common expressions for a reason.
A few years back I tried an iPhone app (123D Catch from Autodesk) that allowed creating 3D models just by manually moving the phone around an object. No turntable or stand needed.
It worked surprisingly well, but was unfortunately discontinued. Since it was cloud based the app no longer works :(
I wonder if there are any similar apps available now?
> The sample scan provided by Megel to demonstrate the scanner's capabilities took, he claims, less than an hour of wall time and just four clicks of user interaction — though the processing requires the in-beta cloud platform or a more powerful host PC, with the Raspberry Pi unable to provide enough compute itself.
Is there an actual reason for this? Given the Raspberry Pi is a computer too, shouldn't it be able to compute the results as well, only slower than a proper PC?
Photogrammetry is extremely compute-intensive, but it is embarrassingly parallel. You need to compute feature points in all your images (which is generally fairly fast, even if you use something like SIFT). Then you have to match features, which in the worst case (non-sequential images) is a brute-force N^2 problem over image pairs, where each image might have several thousand potential keypoints. Once you've done that, you do some non-linear optimisation (typically Levenberg-Marquardt or some variant) to solve for pairwise camera parameters, orientations, and feature point locations. The joint problem is called bundle adjustment, and you might have millions of points to solve for. Finally you have to mesh all these points together, using, I guess, something like Delaunay triangulation.
Even on a good desktop processor, it can take hours for a small scene. You really want a GPU for it.
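If it helps to see what the first two stages look like in practice, here is a small Python/OpenCV sketch of per-image feature detection and brute-force pairwise matching (the part that blows up as N^2 over image pairs). The filenames are placeholders; this is just an illustration, not what Meshroom or the OpenScan cloud pipeline actually runs.

    # Sketch of the first two pipeline stages: SIFT keypoint detection per
    # image and brute-force descriptor matching for one image pair.
    import cv2

    img1 = cv2.imread("view_000.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder filenames
    img2 = cv2.imread("view_001.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)  # often thousands of keypoints per image
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matching; in the worst case this is repeated for every
    # image pair, which is where the N^2 cost comes from.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test to discard ambiguous matches before pose estimation
    # and bundle adjustment.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(len(kp1), len(kp2), "keypoints;", len(good), "good matches")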