Hacker News

A couple of notes about the limits of things that have already been reached today:

A pixel in a camera sensor has to be a couple of wavelengths in size. This is basically the size that we can make them now. There are also diffraction limits for any imaging system, and a size limit on the lens (if your lens is too small, its surface won't collect enough photons in low light to make an image - no matter how you focus them). So I don't think it would be possible to make a deep sub-millimeter camera with usable image quality, and I don't think the 10-order-of-magnitude gains in power efficiency will be coming, either (if you can't shrink it, you're stuck switching larger circuitry and dealing with larger charges). Now, spy cameras can already be annoyingly tiny, but I don't think we're headed for cameras on a speck of dust or some such. One piece of evidence for this is that tiny sub-millimeter insects have lost most or all of their vision - eyes don't scale well to these sizes.
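To put the pixel-size point in numbers, a quick sketch assuming a ~1.1 µm pixel pitch (roughly two green wavelengths, about what shipping sensors already use) and a modest VGA frame:

```python
# Quick arithmetic on the minimum sensor die size if each pixel must span
# a couple of visible wavelengths. The pixel pitch below is an assumption
# matching the ~1.1 um pixels in current phone sensors.

wavelength = 550e-9            # green light, m
pixel_pitch = 2 * wavelength   # ~1.1 um, about the practical floor
cols, rows = 640, 480          # a modest VGA image

width = cols * pixel_pitch
height = rows * pixel_pitch
print(f"{width * 1e3:.2f} mm x {height * 1e3:.2f} mm")  # ~0.70 mm x 0.53 mm
```

The bare sensor alone approaches a millimeter across before you add any optics, which is why a dust-speck camera with usable image quality looks implausible.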

Genome sequencing: I think there are limits here that are already being hit. E.g. looking at the USB-attached genome reader that Charlie Stross mentions, I see that it really needs the material you're feeding it to be separated into DNA/non-DNA fractions for decent performance (otherwise, the micropores will be deluged by non-DNA material), and it needs a conventional chemistry lab in order to cut something as large as a human genome into manageable chunks. For this you need things like centrifuges, which can't become microscopic. I think there are fundamental limits reached here as well, since you can't make a pore arbitrarily small (molecular-sized) without having all sorts of molecular junk jam into it, if it is present.




Just because there are problems with cameras, don't discount vibrational sensors. (Non-recording acoustics, to get around legal problems.)

People are already putting little processors that can be powered for a few *years* on two AA alkalines on boards with little MEMS microphones. Give them a low-powered way to communicate, and you can do a lot with just sound. Put one next to a piece of continuously operating machinery and you can use Fourier analysis to determine if the operation is normal, or if there's a malfunction. People are already doing this, but the degree to which these systems are getting more mobile, remote, and maintenance-free is amazing.
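The machinery-monitoring idea can be sketched in a few lines. This is a toy illustration only: the 50 Hz hum, the 120 Hz fault line, and the thresholds are all invented for the example.

```python
import numpy as np

# Toy sketch: flag an abnormal spectral line in a vibration signal.
# A healthy motor is assumed to hum at 50 Hz; a bearing fault adds
# energy near 120 Hz. All numbers here are made up for illustration.

fs = 1000                       # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)

healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 120 * t)

def abnormal(signal, fs, expected_hz=50.0, tol=5.0, ratio=0.2):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    in_band = np.abs(freqs - expected_hz) < tol
    # Flag if out-of-band energy is large relative to the expected line.
    return spectrum[~in_band].max() > ratio * spectrum[in_band].max()

print(abnormal(healthy, fs))  # False
print(abnormal(faulty, fs))   # True
```

A real deployment would average many frames and track a learned baseline spectrum, but the core operation really is this cheap.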

Create 2mm-square "smart glitter" motes that communicate through low-powered radios with receivers mounted on power and telephone lines, and you could have a system that recognizes the spectral signature of a scream, gunfire, a car accident, or the sounds of interpersonal violence, localizes it with GPS, and reports it to dispatchers immediately.
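The localization step can be sketched naively, assuming known sensor positions and synchronized clocks. All coordinates here are invented, and the brute-force grid search stands in for the smarter solvers a real system would use.

```python
import numpy as np

# Naive sketch of locating a sound from arrival times at fixed sensors.
# Sensor positions and the event location are invented for illustration.

c = 343.0  # speed of sound, m/s
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
event = np.array([30.0, 70.0])

# Simulated absolute arrival times (sound emitted at t = 0).
arrivals = np.linalg.norm(sensors - event, axis=1) / c

# Grid search over candidate positions. The emission time is unknown,
# so we score by the spread of (arrival - travel time), which is
# invariant to a constant time offset.
best, best_err = None, np.inf
for x in np.arange(0.0, 101.0, 1.0):
    for y in np.arange(0.0, 101.0, 1.0):
        travel = np.linalg.norm(sensors - [x, y], axis=1) / c
        err = (arrivals - travel).std()
        if err < best_err:
            best, best_err = (x, y), err

print(best)  # recovers a point near (30.0, 70.0)
```

With four non-collinear sensors and noiseless timings the minimum is unique; noise and clock jitter are what make the real problem hard.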

And yes, the potential for evil with such devices is tremendous as well.

http://en.wikipedia.org/wiki/A_Deepness_in_the_Sky


Sensors could manage with some sort of interferometry to rebuild an image from multiple pixels across multiple sensors. The signal processing would be similar to what is used for medical scanners. We could also see a convergence with radar, combining those pixels and algorithms with an active source.


Only if they can record phase information and have some way of determining the overlap of their visual fields.

Recording sufficiently accurate phase information for visual wavelengths is really hard.


He was perhaps overenthusiastic about the sequencing aspect, and while I don't think it will come about as he describes (for the reasons you give), I expect some method of real-time ecosystem analysis to exist.

Your point on camera sensor size is well made. But compressed sensing will allow much lower energy usage, and exploiting redundancies at local nodes in the network will let us do well at otherwise impractical sizes.


I am not sure what you mean by compressed sensing - if you're only interested in information such as "child-sized object moving rapidly across field of view", then you can get away with smaller sensors, but here, Charlie Stross is also talking about faces being recognized in a crowd. That's going to require a relatively bulky camera system, just because you need to collect a large amount of photons (which requires an aperture/lens of a certain size), and record them on a sensor which can decide which direction they came from (which also needs a certain size). This is the reason why flies and bees have eyes that are (small) webcam-sized, and they probably can't pick out facial features, even if they cared to or had the brain to process them.

For the sequencing, I am really a lot less sure what's ultimately possible, but I wanted to call attention to possible overhyping. If you read the manufacturer's advertisement for the USB genome scanner, it starts out by saying that it accepts a drop of saliva or blood and starts producing a genome. Then it sort of segues into saying that you'd want to separate out the chromosome which you want sequenced (which involves centrifuging the cells to separate the chromosomes by size), and cut that chromosome into manageable pieces (wet chemistry, more centrifuging), and then it will read most of that chromosome in its 6-hour lifetime, but of course you'll only have piecewise sequences to put together. Uh, that was not what you said in the beginning, and I have a feeling there are more gotchas - this is, after all, still just the advertisement.


A good explanation of compressed sensing: http://terrytao.wordpress.com/2007/04/13/compressed-sensing-.... I don't see it working for facial recognition as-is, but who knows what will happen when you combine this with clever linear coding, local networks, and ubiquitous computing. Faces are highly compressible with respect to recognition (we do it quickly in poor conditions). I have little doubt that careful timing, exploiting patterns in movement, and clever arrangements of the devices will allow tiny, energy-efficient cameras to recognize even faces.
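For the curious, here is a toy compressed-sensing reconstruction: recover a sparse signal from far fewer random measurements than its length. The Gaussian measurement matrix, plain ISTA solver, and all sizes and step parameters are arbitrary illustrative choices.

```python
import numpy as np

# Toy compressed sensing: a length-200 signal with 5 nonzeros is
# recovered from only 60 random linear measurements, via iterative
# soft thresholding (ISTA) for the L1-regularized least-squares problem.

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5            # signal length, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                  # m measurements, m << n

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # gradient step size
lam = 0.01                                      # L1 penalty weight
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # relative error, typically well below 1
```

The point is that sparsity lets you undersample dramatically; whether that helps with face recognition at these physical scales is the open question in this thread.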

For sequencing, I agree. I don't see having a bunch of tiny sequencers auto-sequencing everything to be particularly feasible or leading to anything like what he mentions. But then again, we are both thinking linearly when the tech is improving 'exponentially'. It is possible that sequencers will be chemical computers or DNA-based or who knows? Current preprocessing limitations and separability difficulties could turn out to be one of those "I can't believe they used to think that was hard!" moments.

Regardless, the scenario itself, that of being able to monitor vastly more data at a biological and ecosystem level is almost certain. And that is what is important - not nitpicking the details of implementation.

What happens to society when you have sensors and substantial computing power everywhere? This is one of the scenarios in Vinge's singularity...


> I am not sure what you mean by compressed sensing

Google "compressed sensing". Add "site:stanford.edu" for some focus.

> faces being recognized in a crowd. That's going to require a relatively bulky camera system, just because you need to collect a large amount of photons (which requires an aperture/lens of certain size

Turns out that you don't.

See Candès's talk on http://www.stanford.edu/class/ee380/ay1011.html and Baron's talk on http://www.stanford.edu/class/ee380/ay0607.html


I'm not sure what this comment is supposed to add. Maybe to say that you don't need to resolve an image very well to detect a face?

In any event, the resolution of your imaging system is fundamentally limited by its aperture. That is a limitation dictated by physics. You can play fun games with synthetic apertures provided that you can recover the phase of the incoming radiation, but recovering the phase of 700nm radiation is really hard to do. There seems to be a misconception that a single-pixel detection system is somehow immune to diffraction, but this is simply not the case.
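The aperture limit can be put in numbers with the Rayleigh criterion, theta ≈ 1.22 λ/D. The 1 mm aperture and 10 m range below are illustrative assumptions, not figures from the thread.

```python
# Back-of-envelope diffraction limit via the Rayleigh criterion.
# The aperture and distance are assumed values for illustration.

wavelength = 550e-9   # green light, m
aperture = 1e-3       # a 1 mm lens, m
distance = 10.0       # range to the subject, m

theta = 1.22 * wavelength / aperture   # angular resolution, rad
feature = theta * distance             # smallest resolvable feature, m

print(f"{feature * 1e3:.2f} mm")  # ~6.71 mm resolvable at 10 m
```

Features around 7 mm are marginal for telling faces apart at 10 m, and shrinking the lens below a millimeter only makes it worse, regardless of how clever the back-end reconstruction is.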


So how about finding the chromosomes using some sort of molecular sieve? Or thousands and thousands of them, if they don't work very well.

I'm asking that from a completely naive point of view, but it seems to me that there is a good chance of completely new methods being created, not just the scaling down of existing methods.


I think there's great potential in merging biological mechanisms with a traditional digital/electronic interface when it comes to genome sequencing. Our white blood cells already do a good job of recognizing cells, as do viruses, etc. -- I'm confident we'll see virus/RNA/DNA-derived sequencing before too long.

If nothing else, by looking at nature, we can see some practical examples of how far it should be possible to take this technology.


Theoretically, a single pixel camera that scanned fast enough and held the result in memory could create an image with many pixels, so it seems like this barrier isn't necessarily absolute.


But how well is that going to work in dim light? You're going to hit a limit to how fast you can scan given the time it takes to gather enough photons. When you start to have to be too clever is when you've started to hit limits.
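Rough photon-budget arithmetic makes the point. The photon rate, aperture area, and per-pixel photon count below are order-of-magnitude assumptions for illustration, not measurements.

```python
# Rough photon budget for a scanning single-pixel camera in dim light.
# All figures are assumed, order-of-magnitude values.

photon_rate_per_mm2 = 1e4   # photons/s/mm^2 in dim (moonlight-ish) scenes, assumed
aperture_area = 0.01        # mm^2: a ~0.1 mm x 0.1 mm "dust mote" aperture
pixels = 640 * 480          # pixels to scan for one VGA frame
photons_per_pixel = 100     # for barely usable shot-noise SNR (~10:1)

rate = photon_rate_per_mm2 * aperture_area          # photons/s collected
seconds_per_frame = pixels * photons_per_pixel / rate
print(f"{seconds_per_frame:.0f} s per frame")       # days, not video
```

Under these assumptions a single frame takes on the order of 10^5 seconds: the scan rate is dictated by photon arrival statistics, not by how fast the electronics can go.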


That's about two decades ago, then.


None of that was too clever, obviously.


There is still a limit on lens size that is very hard to work around. I think that by scanning, a camera like this could function as a pinhole lens, but the effective lens size would only be the size of the single pixel.



