OpenScan Offers 10-Micron 3D Scanning on a Raspberry Pi and HQ Camera Module (hackster.io)
149 points by ENadyr on Feb 25, 2021 | 26 comments



That does not look like a 10 micron scan...

Edit:

Here is the source for the claim: https://www.reddit.com/r/OpenScan/comments/gfottc/10_micron_...

It seems this might be the theoretical error that you can get when the system can identify a feature perfectly, and when it gets a usable reflection from that point at enough angles.

But this is not the case for most points on any real object (i.e. not a chalk-coated gauge block), so you're definitely not going to get a model where the maximum error is 10 microns.

The results are certainly impressive, but hackster.io is taking liberties with that headline. It's not a realistic accuracy and the author of the project doesn't really seem to be making that claim.


I don't think I follow your reasoning about 'real objects'. Real objects can (and usually must) be coated with scanning sprays, like the Aesub Blue the author notes using in several places. These sprays then vanish completely from the object after a few hours. The entire point of these sprays is to ensure that your model is covered in perfectly identifiable features.

The author makes the 10 micron claim in several other places, including https://www.reddit.com/r/photogrammetry/comments/k5dbtk/auto... about a totally different model. That thread discusses ways to quantify accuracy and generally agrees that the rig should not be capable of such feats.

But then there are images like https://www.reddit.com/r/OpenScan/comments/ls17tt/closeup_of... in which his scans are picking up the silk screening on the surface of the board, which at least lends some credence to the idea that he's getting 10 micron accuracy.


This might be a good place to ask, because I've been wondering about this for a while: do any of these scanning sprays react with copper, silver, or gold? I can never find an actual ingredients list for them, and all the MSDS ever lists is various hydrocarbons, which I know won't react.


Cyclododecane was the industry standard for vanishing 3D scanning sprays, but since cyclododecane is no longer available from the textile industry, that kind of spray no longer exists. Aesub Blue is a rather new product from a young German company. I met the founders last year, and they are really professional and transparent. Just check out their documentation. To be fair, as the product is so new, possible interactions with materials are still not perfectly understood. But from what I have seen, the feedback from the professional scanning community is very positive.


Hi, this is Thomas, the guy behind OpenScan. I would really like to clarify several things:

- I have never made such a bold claim of 10 micron accuracy for the device. Yes, I have made several scans where the accuracy is quite good, but I would never say that this can be reached under all circumstances.

- To be fair, I have been kinda click-baity in the past on a handful of occasions, but anyone who does at least 5 minutes of research will find, for instance, this one here: https://en.openscan.eu/quality

- Comparing this 100-200€ device with an industrial 3D scanner is like comparing a hobbyist 3D printer with a high-end injection molding machine...

Anyway, feel free to discuss and ask me anything about it :)


Really nice work, by the way... I hope my comment didn't seem overly harsh. I know that doing testing with gauge blocks is part of the process and I really only took issue with the hackster.io headline.


Thank you, and no worries - I absolutely agree with your valid criticism. Later this year I will get some more CT reference scans to compare my results against. I think that will give some better insights. By the way, even on the professional market there seems to be no "universal" benchmarking procedure for comparing the various mid-range scanners (1-20k€), so this is a very interesting discussion point.


I think most 3d imaging stuff uses MTF [1,2] to measure the real spatial resolution performance.

[1] http://www.imagescienceassociates.com/mm5/pubs/What_is_an_MT...

[2] https://iopscience.iop.org/article/10.1088/1361-6501/aa9aa0/...
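
To make that concrete: a toy numpy sketch of the usual edge-based MTF computation. This uses a synthetic, perfectly aligned edge; a real measurement uses a slanted-edge target with oversampling, as described in [1].

    import numpy as np

    # Synthetic edge spread function (ESF): a smoothed step edge,
    # standing in for a measured edge profile from a real image.
    x = np.linspace(-1, 1, 256)
    esf = 0.5 * (1 + np.tanh(x / 0.05))

    # The line spread function (LSF) is the derivative of the ESF;
    # normalise so that MTF(0) = 1.
    lsf = np.gradient(esf)
    lsf /= lsf.sum()

    # The MTF is the magnitude of the Fourier transform of the LSF.
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size)  # cycles per sample

    # Resolution is often quoted as MTF50: the frequency where
    # contrast has dropped to 50%.
    mtf50 = freqs[np.argmax(mtf < 0.5)]
    print(f"MTF50 ~ {mtf50:.3f} cycles/sample")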


So the whole project that I can find documentation for seems to be about soldering together a couple of stepper drivers (rather than just using a scrap 3D printer controller, which everyone has sitting around) and a ring light. Where's the photogrammetry workflow part?


A neat video of the openscan mini in action can be found at https://www.reddit.com/r/3Dprinting/comments/jx2aww/3d_print...

and the results from that scan along with the cloud processing photogrammetry workflow can be seen in the video at https://www.youtube.com/watch?v=EhvFq-OYa1g

The author says he wants a one-click solution for getting a 3D model of a scanned object. I think that's a great goal - as easy as hitting the green button on a photocopier, except now three-dimensional. Between the automated picture-taking and that cloud processing workflow, this project is already remarkably close.


https://raw.githubusercontent.com/OpenScanEu/OpenScan/master...

I think this project is about taking a bunch of photos of the object and sharing them over Samba. Then you can load them into Meshroom or other software. It's not an all-in-one package.


Yes, this is the current public stage. The device takes a bunch of images and creates a zip file that needs to be downloaded and processed manually. The main advantage already is that the images and the lighting are very consistent, which is a major key to successful photogrammetry. The upcoming cloud processing will further simplify this, but as I am doing all of this alone (+ help from the community), progress is rather slow...
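
For a flavour of what the capture side involves, here is a minimal sketch - this is not the actual OpenScan firmware, and the pin assignments, step counts, and camera settings are made-up placeholders. The idea is to lock exposure and white balance once, then alternate between stepping the turntable and taking a photo, and finally zip everything up:

    import time
    import zipfile
    import RPi.GPIO as GPIO
    from picamera import PiCamera

    STEP_PIN, DIR_PIN = 17, 27            # hypothetical wiring
    PHOTOS = 36                           # one photo every 10 degrees
    STEPS_PER_PHOTO = 200 * 16 // PHOTOS  # 200-step motor, 1/16 microstepping

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)
    GPIO.output(DIR_PIN, 1)               # fixed rotation direction

    camera = PiCamera(resolution=(4056, 3040))   # HQ camera full frame
    time.sleep(2)                                # let auto-exposure settle once...
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'                 # ...then lock exposure
    gains = camera.awb_gains
    camera.awb_mode = 'off'                      # ...and white balance,
    camera.awb_gains = gains                     # so every image is consistent

    names = []
    for i in range(PHOTOS):
        name = f"scan_{i:03d}.jpg"
        camera.capture(name)
        names.append(name)
        for _ in range(STEPS_PER_PHOTO):         # advance the turntable
            GPIO.output(STEP_PIN, 1); time.sleep(0.001)
            GPIO.output(STEP_PIN, 0); time.sleep(0.001)

    with zipfile.ZipFile("scan.zip", "w") as z:  # bundle for download
        for name in names:
            z.write(name)
    GPIO.cleanup()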


In that manual, there are several sections. The first three seem to have decent content:

1. Soldering

2. Wiring all parts

3. User Interface

That third one ("User Interface") shows how to use the software. From just glancing at it, it seems like it'll be OK.

At the very end there is a list of future things that I'm guessing will also be added to the manual:

4. Workflow

5. Build - 3D Printed Version

6. Build - CNC version

7. Photogrammetry Software - Overview

8. Post Processing

Hopefully these last sections are also filled out in the near future, as they sound pretty useful. :)


What I don't get is why this isn't easy to do with a smartphone. A little motorized turntable with a stand for your phone and a calibration object covered in tracking symbols to use for adjusting the input from the camera. Why is that so impossible?


It's easy to do with a smartphone, and you usually don't even need a calibration object. Software like Metashape or Meshroom can infer the lens transform and the position of the camera from enough photos of the same object taken from different angles. The small smartphone optics actually help you by giving you a deeper DoF.

Not the best example, but this http://koanlabs.org/vignettes.html rock is ~5 cm tall, and my best guess is that the minimum detectable feature size is about 10 microns.

EDIT: Sorry, to be clear, the definition of easy here is "as easy as most photogrammetry": to get good results you will universally do a lot of knob-twiddling, waiting, and cleaning up broken bits of mesh, unless all you do is scan rocks. Scanning rocks is easy.


I don't have expertise in this area, but my experience with other engineering disciplines is that nothing is ever as easy in practice as it is in words, because there are a lot of things that can go wrong or turn out not to be compatible with one another.

"The devil is in the details" and "talk is cheap" are common expressions for a reason.


Basically: lens imperfections, camera noise, vibration, and a lack of rigid setups (plus a hefty dose of etc.).


Actually, this is how I started the project. See for instance this one here (OpenScan in 2019: https://www.instagram.com/p/Bt6JtjEoEM7/).

But I have found that smartphones give very inconsistent results (probably due to image-"improving" algorithms...); see https://www.instagram.com/p/B1q1syMoHfG/

So I moved on to the Raspberry Pi + camera, which is much easier and better to control, as you need very, very consistent images...


Interesting. What's the materials cost for the pi-based setup?


100-200€ plus the printed parts (~50€). But most people DIY some or even all of the parts for even less.


A few years back I tried an iPhone app (123D Catch from Autodesk) that allowed creating 3D models just by manually moving the phone around an object. No turntable or stand needed.

It worked surprisingly well, but was unfortunately discontinued. Since it was cloud-based, the app no longer works :(

I wonder if there are any similar apps available now?


> The sample scan provided by Megel to demonstrate the scanner's capabilities took, he claims, less than an hour of wall time and just four clicks of user interaction — though the processing requires the in-beta cloud platform or a more powerful host PC, with the Raspberry Pi unable to provide enough compute itself.

Is there an actual reason for this? Given that the Raspberry Pi is a computer too, shouldn't it be able to compute the results as well, just slower than a proper PC?


Photogrammetry is extremely compute-intensive, but it is embarrassingly parallel. You need to compute feature points in all your images (which is generally fairly fast, even if you use something like SIFT). Then you have to match features, which in the worst case (non-sequential images) is a brute-force N^2 problem where each of the N images might have several thousand potential keypoints. Once you've done that, you do some non-linear optimisation (typically Levenberg-Marquardt or some variant) to solve for pairwise camera parameters, orientations, and feature point locations. The joint problem is called bundle adjustment, and you might have millions of points to solve for. Finally, you have to mesh all these points together using, I guess, something like Delaunay triangulation.

Even on a good desktop processor, it can take hours for a small scene. You really want a GPU for it.
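
For anyone curious what the front half of that pipeline looks like in practice, here is a minimal OpenCV sketch of the feature-extraction and pairwise-matching stages. The filenames and camera intrinsics are made-up placeholders, and tools like Meshroom or Metashape do far more robust versions of this, plus the bundle adjustment and meshing:

    import cv2
    import numpy as np

    # Two neighbouring views of the object (placeholder filenames).
    img1 = cv2.imread("scan_000.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("scan_001.jpg", cv2.IMREAD_GRAYSCALE)

    # Stage 1: SIFT keypoints, often thousands per image.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Stage 2: brute-force matching with Lowe's ratio test.
    # Doing this for every image pair is the N^2 step described above.
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Relative geometry for this one pair (guessed intrinsics K).
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    K = np.array([[3000., 0., 2028.], [0., 3000., 1520.], [0., 0., 1.]])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    print(f"{len(good)} matches; recovered relative pose for one pair")

The bundle adjustment over all pairs and the final meshing are the expensive parts, which is where the GPU (or the cloud service) earns its keep.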


The unstated phrase at the end of the sentence being “to finish in a reasonable amount of time.”

No one wants to wait two weeks while their r-pi converts a point cloud to a mesh.


Are there any projects using an OEM LIDAR module with a Pi?


The title is lying about resolution by a couple orders of magnitude.



