A pixel in a camera sensor has to be a couple of wavelengths in size. This is basically the size that we can make them now. There are also diffraction limits for any imaging system, and a size limit on the lens (if your lens is too small, its surface won't collect enough photons in low light to make an image - no matter how you focus them). So I don't think it would be possible to make a deep sub-millimeter camera with usable image quality, and I don't think the 10-order-of-magnitude gains in power efficiency will be coming, either (if you can't shrink it, you're stuck switching larger circuitry and dealing with larger charges). Now, spy cameras can already be annoyingly tiny, but I don't think we're headed for cameras on a speck of dust or some such. One piece of evidence for this is that tiny sub-millimeter insects have lost most or all of their vision - eyes don't scale well to these sizes.
Genome sequencing: I think limitations are already being hit here. E.g., looking at the USB-attached genome reader that Charlie Stross mentions, it really needs the material you feed it to be separated into DNA/non-DNA fractions for decent performance (otherwise the micropores will be deluged by non-DNA material), and it needs a conventional chemistry lab to cut something as large as a human genome into manageable chunks. For that you need things like centrifuges, which can't become microscopic. I think fundamental limits are being reached here as well, since you can't make a pore arbitrarily small (molecular-sized) without all sorts of molecular junk jamming into it, if any is present.
People are already putting little processors that can be powered for a few *years* on two AA alkalines on boards with little MEMS microphones. Give them a low-powered way to communicate, and you can do a lot with just sound. Put one next to a piece of continuously operating machinery and you can use Fourier analysis to determine if the operation is normal, or if there's a malfunction. People are already doing this, but the degree to which these systems are getting more mobile, remote, and maintenance-free is amazing.
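To make that concrete, here's a minimal sketch of that kind of spectral check, assuming numpy; the signals, the 120 Hz/317 Hz frequencies, and the 0.15 threshold are all made up for illustration, not from any real monitoring product:

```python
# Sketch: compare a machine's sound spectrum against a known-good baseline.
import numpy as np

def spectral_signature(samples):
    """Normalized magnitude spectrum of an audio snippet."""
    window = np.hanning(len(samples))             # taper to reduce spectral leakage
    mags = np.abs(np.fft.rfft(samples * window))
    return mags / (np.linalg.norm(mags) + 1e-12)

def looks_abnormal(snippet, baseline_sig, threshold=0.15):
    """Flag the machine when its spectrum drifts from the baseline.
    The threshold is an illustrative assumption; you'd tune it per machine."""
    return np.linalg.norm(spectral_signature(snippet) - baseline_sig) > threshold

# Made-up data standing in for the MEMS microphone:
rate = 8000
t = np.arange(rate) / rate
healthy = np.sin(2 * np.pi * 120 * t)                  # the normal 120 Hz hum
worn    = healthy + 0.5 * np.sin(2 * np.pi * 317 * t)  # a new bearing whine
baseline = spectral_signature(healthy)
print(looks_abnormal(healthy, baseline))  # False
print(looks_abnormal(worn, baseline))     # True
```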
Create 2mm square "smart glitter" that communicate through low-powered radios to radios mounted on power and telephone lines, and you could have a system that can recognize the spectral signature of a scream, gunfire, a car accident, or the sounds of interpersonal violence, localize it with GPS and communicate it to dispatchers immediately.
And yes, the potential for evil with such devices is tremendous as well.
Recording sufficiently accurate phase information for visual wavelengths is really hard.
Your point on camera sensor size is well made. But compressed sensing will allow much lower energy usage, and exploiting redundancies at local nodes in the network will let us do well at otherwise impractical sizes.
For the sequencing, I am really a lot less sure what's ultimately possible, but I wanted to call attention to possible overhyping. If you read the manufacturer's advertisement for the USB genome scanner, it starts out by saying that it accepts a drop of saliva or blood and starts producing a genome. Then it sort of segues into saying that you'd want to separate out the chromosome which you want sequenced (which involves centrifuging the cells to separate the chromosomes by size), and cut that chromosome into manageable pieces (wet chemistry, more centrifuging), and then it will read most of that chromosome in its 6-hour lifetime, but of course you'll only have piecewise sequences to put together. Uh, that was not what you said in the beginning, and I have a feeling there are more gotchas - this is, after all, still just the advertisement.
For sequencing, I agree. I don't see a bunch of tiny sequencers auto-sequencing everything as particularly feasible, or leading to anything like what he mentions. But then again, we are both thinking linearly while the tech is improving 'exponentially'. It is possible that sequencers will be chemical computers or DNA-based or who knows? Current preprocessing limitations and separability difficulties could turn out to be one of those "I can't believe they used to think that was hard!" moments.
Regardless, the scenario itself, that of being able to monitor vastly more data at a biological and ecosystem level is almost certain. And that is what is important - not nitpicking the details of implementation.
What happens to society when you have sensors and substantial computing power everywhere? This is one of the scenarios in Vinge's singularity...
Google "compressed sensing" Add "site:stanford.edu" for some focus.
> faces being recognized in a crowd. That's going to require a relatively bulky camera system, just because you need to collect a large number of photons (which requires an aperture/lens of a certain size).
Turns out that you don't.
See the Candès talk at http://www.stanford.edu/class/ee380/ay1011.html and Baron's talk.
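For a toy version of the idea behind those talks - recovering a sparse signal from far fewer random measurements than its length - here is a from-scratch orthogonal matching pursuit, a standard compressed-sensing recovery method. It's not code from either talk, and all the problem sizes are illustrative:

```python
# Toy compressed sensing: recover a k-sparse length-n signal from m << n
# random linear measurements, via orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                 # signal length, measurements, sparsity

x = np.zeros(n)                      # ground-truth sparse signal
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # the m measurements we "sense"

def omp(A, y, k):
    """Greedy recovery: pick the column most correlated with the residual,
    re-fit by least squares over the chosen columns, repeat k times."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("max recovery error:", np.max(np.abs(x_hat - x)))  # ~1e-15 typically
```

The point of the demo: 64 generic measurements suffice to exactly recover a 256-sample signal, provided it is sparse - which is the redundancy-exploitation argument made above.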
In any event, the resolution of your imaging system is fundamentally limited by its aperture. That is a limitation dictated by physics. You can play fun games with synthetic apertures provided you can recover the phase of the incoming radiation, but recovering the phase of 700nm radiation is really hard to do. There seems to be a misconception that a single-pixel detection system is somehow immune to diffraction, but this is simply not the case.
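To put rough numbers on the aperture limit (the 1 mm aperture and 10 m standoff are my illustrative assumptions, using the Rayleigh criterion):

```python
# Back-of-envelope Rayleigh criterion for a tiny lens.
wavelength = 700e-9      # meters; the 700nm red light mentioned above
aperture   = 1e-3        # meters; a "dust-speck" 1 mm lens (assumption)
distance   = 10.0        # meters to the subject (assumption)

theta = 1.22 * wavelength / aperture      # smallest resolvable angle, radians
spot  = theta * distance                  # resolvable detail at that distance
print(f"angular resolution: {theta:.2e} rad")     # ~8.5e-4 rad
print(f"detail at 10 m:     {spot * 1000:.1f} mm") # ~8.5 mm per resolvable spot
```

At ~8.5 mm per resolvable element, a 20 cm face spans only a couple dozen elements - which is why face recognition at a distance keeps pushing you back toward a bulky aperture.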
I'm asking that from a completely naive point of view, but it seems to me that there is a good chance of completely new methods being created, not just the scaling down of existing methods.
If nothing else, by looking at nature, we can see some practical examples of how far it should be possible to take this technology.
One camera may be too small to capture a usable image (in both resolution & color depth). A lot of them, thrown around randomly, may provide interesting results given enough math.
Also there's Moore's second law about the cost of fabs rising exponentially at about the same rate. If GDP growth doesn't accelerate dramatically, we won't be able to afford fabs any longer in a few decades.
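Rough numbers for that argument, assuming a ~$10B leading-edge fab, Rock's-law doubling every four years, and ~3% real GDP growth - all round assumptions:

```python
# How fab cost compares to world GDP if both trends continue.
fab_cost = 10e9          # dollars; order of magnitude for a leading-edge fab
gdp      = 70e12         # dollars; rough world GDP (assumption)
for years in (0, 12, 24, 36):
    cost = fab_cost * 2 ** (years / 4)    # fab cost doubles every 4 years
    econ = gdp * 1.03 ** years            # economy grows ~3%/year
    print(f"year +{years:2d}: fab = ${cost / 1e9:6.0f}B, "
          f"{100 * cost / econ:.3f}% of world GDP")
```

The share climbs from ~0.01% to ~2.5% of world GDP in 36 years - the point where "a few more decades" of the same trend stops being affordable for anyone.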
The physics and the economics of smaller process nodes are becoming harder and harder, to the point where anything beyond 10nm or so looks extremely difficult.
Switching architectures, materials, moving to the atomic scale - there is still plenty of room at the bottom. I'm excited for memristor tech myself.
On the other hand, I would like to see a graph plotting time or nanometers against the cost of the factory needed to make them. I can't find one, but I remember seeing graphs that hinted the cost of building a production line would become prohibitive for everyone.
A Cray-1 could peak at about 250 MFLOPS (http://en.wikipedia.org/wiki/Cray-1), and a modern smartphone like the Galaxy Nexus peaks at about 9.6 GFLOPS (using ARM NEON instructions on both cores). That's less than two orders of magnitude difference.
Floating-point power efficiency seems to have improved by about 6-7 orders of magnitude in that time though, which is very nice :)
But the details of the comparison don't change the part where a modern cell phone is nicely comparable to a $millions computer from a few decades ago.
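The arithmetic, using the two peak figures quoted above:

```python
import math

cray_1 = 250e6    # peak FLOPS, Cray-1 (per the Wikipedia figure above)
nexus  = 9.6e9    # peak FLOPS, Galaxy Nexus with NEON on both cores
ratio  = nexus / cray_1
print(ratio, math.log10(ratio))   # ~38x, i.e. ~1.6 orders of magnitude
```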
Those are financial limits; the physical scaling limits will be hit at about 1-3 nm. But it will take longer to get there, as doubling will no longer happen in constant time.
EDIT: Maybe more likely is that we might be seeing a future slowdown of Moore's law to a doubling every 24 months or more, allowing fabs more time to amortize their investments. This would be bad, but not the end of the world.
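Concretely, here's what slowing from 18-month to 24-month doubling means over a decade (just the exponential arithmetic, nothing more):

```python
# Cumulative improvement over 10 years at two doubling periods.
months = 120
print(2 ** (months / 18))   # ~101x at the historical 18-month pace
print(2 ** (months / 24))   # 32x at the slower pace: behind, but still growth
```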