
Actually, you are sort of wrong in both cases, because (a) the drone is moving; (b) the drone can take multiple pictures; (c) multiple drones can take pictures simultaneously.

In the case of multiple drones taking pictures, the shots can be synchronized using GPS time down to a microsecond, and knowing the GPS coordinates of each drone you can recover the image, pretty much with arbitrary quality. (And if you are really good at it and the object is stationary, you can do it even through heavy fog. I'm not kidding, you really can; I can point you to some fancy research in computer vision that allows you to do just that.)
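To make the "known positions let you merge captures" idea concrete, here is a minimal 1-D toy sketch: several sensors each sample a scene at low resolution but at precisely known offsets (the role GPS-timed positions play for the drones), and because the offsets are known exactly, the captures can be merged back into the full-resolution signal. All names and sizes are illustrative, not from any real drone system.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-resolution "scene" that no single sensor can capture directly.
hi_res = rng.standard_normal(100)

# Four sensors, each sampling at 1/4 resolution, but at a known,
# slightly different offset.
offsets = [0, 1, 2, 3]
captures = [hi_res[o::4] for o in offsets]

# Because each offset is known exactly, interleaving the low-resolution
# captures reconstructs the full-resolution signal.
merged = np.empty_like(hi_res)
for o, cap in zip(offsets, captures):
    merged[o::4] = cap
```

Real multi-view reconstruction has to estimate sub-pixel registration and handle noise, but the principle is the same: precise knowledge of where each measurement was taken is what makes the merge possible.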



>I can point you to some fancy research in computer vision that allows you to do just that.

Please do


Oh. I've tried finding that exact publication; they took a video of a city scene with dense fog/clouds in the background. If you played the video you couldn't see any shapes through the fog at all. But after passing the whole video through reconstruction/de-noising, they were able to recover the background (you could see a mountain instead of dense fog).
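A crude way to see why a video can reveal what a single frame cannot: if the scene is static and the fog fluctuates, averaging frames beats the fluctuations down by roughly 1/sqrt(N). This toy numpy sketch only models the fluctuating part of the fog as zero-mean noise (real fog also scatters and biases the signal, and the actual paper's reconstruction was certainly more sophisticated); everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy static "background": a 1-D intensity profile (stand-in for the mountain).
x = np.linspace(0, 1, 500)
background = 0.5 + 0.4 * np.sin(6 * x)

# Each frame is the background buried in strong zero-mean "fog" fluctuations.
n_frames = 400
frames = background + rng.normal(0.0, 1.0, size=(n_frames, background.size))

# One frame is hopeless; averaging shrinks the noise by ~1/sqrt(n_frames).
single_err = np.abs(frames[0] - background).mean()
recovered = frames.mean(axis=0)
avg_err = np.abs(recovered - background).mean()
```

With 400 frames the residual noise is about 20x smaller than in any single frame, which is why the shapes only appear after processing the whole video.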

It is not surprising; it's a pretty standard result, actually. It can go by different names (sparse sensing, denoising, super-resolution, ...) and be done by different methods (basis pursuit, LASSO, ...), but it always comes down to having a model (with some priors, e.g. a sparsity prior!) and using that model to re-project the data from your measurement projection into the projection of interest to you.

I have no idea what level you are at, so I'd suggest this very well presented lecture (1 hour total) if you want to learn more: Compressed Sensing by Terence Tao, http://www.youtube.com/playlist?list=PLC94A02A1218B24DF. Examples of image recovery are in parts 5 and 6.





