How low (power) can you go? (antipope.org)
98 points by cstross on Aug 2, 2012 | 30 comments



A couple of notes about limits that have already been reached today:

A pixel in a camera sensor has to be a couple of wavelengths in size. This is basically the size that we can make them now. There are also diffraction limits for any imaging system, and a size limit on the lens (if your lens is too small, its surface won't collect enough photons in low light to make an image - no matter how you focus them). So I don't think it would be possible to make a deep sub-millimeter camera with usable image quality, and I don't think the 10-order-of-magnitude gains in power efficiency will be coming, either (if you can't shrink it, you're stuck switching larger circuitry and dealing with larger charges). Now, spy cameras can already be annoyingly tiny, but I don't think we're headed for cameras on a speck of dust or some such. One piece of evidence for this is that tiny sub-millimeter insects have lost most or all of their vision - eyes don't scale well to these sizes.

Genome sequencing: I think there are limitations here that are already being hit. E.g. looking at the USB-attached genome reader that Charlie Stross mentions, I see that it really needs the material you're feeding it to be separated into DNA/non-DNA fractions for decent performance (otherwise, the micropores will be deluged by non-DNA material), and it needs a conventional chemistry lab in order to cut something as large as a human genome into manageable chunks. For this you need things like centrifuges, which can't become microscopic. I think there are fundamental limits being reached here as well, since you can't make a pore arbitrarily small (molecular-sized) without having all sorts of molecular junk jam into it, if it is present.


Just because there are problems with cameras, don't discount vibrational sensors. (Non-recording acoustics, to get around legal problems.)

People are already putting little processors that can be powered for a few *years* on two AA alkalines on boards with little MEMS microphones. Give them a low-powered way to communicate, and you can do a lot with just sound. Put one next to a piece of continuously operating machinery and you can use Fourier analysis to determine if the operation is normal, or if there's a malfunction. People are already doing this, but the degree to which these systems are getting more mobile, remote, and maintenance-free is amazing.
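To make that concrete, here is a minimal sketch (Python/NumPy, with made-up numbers for the sample rate and the machine's healthy hum) of the kind of Fourier check such a node could run: take a window of mic samples, find the dominant frequency, and flag anything that drifts from the baseline.

    import numpy as np

    # Hypothetical parameters: a MEMS mic sampled at 8 kHz next to a motor
    # whose healthy hum sits at ~120 Hz.
    SAMPLE_RATE = 8000
    BASELINE_PEAK_HZ = 120.0
    TOLERANCE_HZ = 5.0

    def dominant_frequency(samples, rate=SAMPLE_RATE):
        """Return the frequency bin with the most energy (ignoring DC)."""
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        return freqs[1:][np.argmax(spectrum[1:])]

    def looks_healthy(samples):
        return abs(dominant_frequency(samples) - BASELINE_PEAK_HZ) < TOLERANCE_HZ

    # Demo with synthetic data: a healthy 120 Hz hum vs. a 95 Hz wobble.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    print(looks_healthy(np.sin(2 * np.pi * 120 * t)))  # True
    print(looks_healthy(np.sin(2 * np.pi * 95 * t)))   # False

A real deployment would watch several harmonics and trend them over time, but the per-node compute is still just an FFT and a comparison.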

Create 2mm square "smart glitter" that communicate through low-powered radios to radios mounted on power and telephone lines, and you could have a system that can recognize the spectral signature of a scream, gunfire, a car accident, or the sounds of interpersonal violence, localize it with GPS and communicate it to dispatchers immediately.

And yes, the potential for evil with such devices is tremendous as well.

http://en.wikipedia.org/wiki/A_Deepness_in_the_Sky


Sensors could manage with some sort of interferometry to rebuild an image from multiple pixels across multiple sensors. The signal processing would be similar to what is used for medical scanners. We could also see a convergence with radar, combining those pixels and algorithms with an active source.


Only if they can record phase information and have some way of determining the overlap of their visual fields.

Recording sufficiently accurate phase information for visual wavelengths is really hard.


He was perhaps over-enthusiastic about the sequencing aspect, and while I don't think it will come about as he describes (for the reasons you give), I expect some method of real-time ecosystem analysis to exist.

Your point on camera sensor size is well made. But compressed sensing will allow much lower energy usage, and exploiting redundancies at local nodes in the network will let us do well at otherwise impractical sizes.


I am not sure what you mean by compressed sensing - if you're only interested in information such as "child-sized object moving rapidly across field of view", then you can get away with smaller sensors, but here Charlie Stross is also talking about faces being recognized in a crowd. That's going to require a relatively bulky camera system, just because you need to collect a large number of photons (which requires an aperture/lens of a certain size), and record them on a sensor which can decide which direction they came from (which also needs a certain size). This is the reason why flies and bees have eyes that are (small) webcam-sized, and they probably can't pick out facial features, even if they cared to or had the brain to process them.

For the sequencing, I am really a lot less sure what's ultimately possible, but I wanted to call attention to possible overhyping. If you read the manufacturer's advertisement for the USB genome scanner, it starts out by saying that it accepts a drop of saliva or blood and starts producing a genome. Then it sort of segues into saying that you'd want to separate out the chromosome which you want sequenced (which involves centrifuging the cells to separate the chromosomes by size), and cut that chromosome into manageable pieces (wet chemistry, more centrifuging), and then it will read most of that chromosome in its 6-hour lifetime, but of course you'll only have piecewise sequences to put together. Uh, that was not what you said in the beginning, and I have a feeling there are more gotchas - this is, after all, still just the advertisement.


A good explanation of compressed sensing: http://terrytao.wordpress.com/2007/04/13/compressed-sensing-.... I don't see it working for facial recognition as-is, but who knows what will happen when you combine this with clever linear coding, local networks and ubiquitous computing. Faces are highly compressible wrt recognition (we do it quickly in poor conditions). I have little doubt that certain timings, exploiting patterns in movements and clever arrangements of the devices will allow tiny, energy-efficient cameras to recognize even faces.
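For anyone not clicking through: the core trick is recovering a sparse signal from far fewer random measurements than the signal has samples. A rough illustration, with a synthetic sparse signal and orthogonal matching pursuit as the recovery step (other solvers exist; all sizes here are made up for the demo):

    import numpy as np

    rng = np.random.default_rng(0)

    # Sparse signal: n samples, only k nonzero entries
    n, k, m = 256, 8, 64
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    # Random measurement matrix: the "single pixel" takes m incoherent measurements
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily pick the column most correlated
        with the residual, then least-squares fit on the chosen support."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(A, y, k)
    print("max reconstruction error:", np.max(np.abs(x_hat - x)))

So 64 random projections recover a 256-sample signal exactly, because the signal is sparse. That is the sense in which the sensor can be "dumber" than the scene; it says nothing by itself about whether face recognition is feasible at these sizes.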

For sequencing, I agree. I don't see having a bunch of tiny sequencers auto-sequencing everything to be particularly feasible, or leading to anything like what he mentions. But then again, we are both thinking linearly when the tech is improving 'exponentially'. It is possible that sequencers will be chemical computers or DNA-based or who knows? Current preprocessing limitations and separability difficulties could turn out to be one of those "I can't believe they used to think that was hard!" moments.

Regardless, the scenario itself, that of being able to monitor vastly more data at a biological and ecosystem level is almost certain. And that is what is important - not nitpicking the details of implementation.

What happens to society when you have sensors and substantial computing power everywhere? This is one of the scenarios in Vinge's singularity...


> I am not sure what you mean by compressed sensing

Google "compressed sensing" Add "site:stanford.edu" for some focus.

> faces being recognized in a crowd. That's going to require a relatively bulky camera system, just because you need to collect a large amount of photons (which requires an aperture/lens of certain size

Turns out that you don't.

See Candes' talk at http://www.stanford.edu/class/ee380/ay1011.html and Baron's talk at http://www.stanford.edu/class/ee380/ay0607.html


I'm not sure what this comment is supposed to add. Maybe to say that you don't need to resolve an image very well to detect a face?

In any event, the resolution of your imaging system is fundamentally limited by its aperture. That is a limitation dictated by physics. You can play fun games with a synthetic aperture provided that you can recover the phase of the incoming radiation, but recovering the phase of 700nm radiation is really hard to do. There seems to be a misconception that a single-pixel detection system is somehow immune to diffraction, but this is simply not the case.
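For rough numbers, here is the Rayleigh criterion worked out for a couple of apertures (the wavelength, distance and aperture sizes are purely illustrative assumptions):

    import math

    def rayleigh_resolution_m(aperture_m, distance_m, wavelength_m=550e-9):
        """Smallest feature resolvable at a given distance, Rayleigh criterion:
        theta ~= 1.22 * lambda / D (small-angle approximation)."""
        theta = 1.22 * wavelength_m / aperture_m
        return theta * distance_m

    # A 2 mm "smart glitter" aperture vs. a 50 mm camera lens, looking at a
    # subject 20 m away (numbers chosen only for illustration).
    for aperture in (2e-3, 50e-3):
        print(f"{aperture * 1e3:.0f} mm aperture: "
              f"{rayleigh_resolution_m(aperture, 20) * 1e3:.1f} mm at 20 m")
    # ~6.7 mm for the 2 mm aperture, ~0.3 mm for the 50 mm lens

No amount of clever reconstruction gets you below that angular limit; it only helps you make better use of the photons you do collect.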


So how about finding the chromosomes using some sort of molecular sieve? Or thousands and thousands of them, if they don't work very well.

I'm asking that from a completely naive point of view, but it seems to me that there is a good chance of completely new methods being created, not just the scaling down of existing methods.


I think there's great potential in merging biological components with a traditional digital/electronic interface when it comes to genome sequencing. Our white blood cells already do a good job of recognizing cells, as do viruses etc. -- I'm confident we'll see virus/RNA/DNA-derived sequencing before too long.

If nothing else, by looking at nature, we can see some practical examples of how far it should be possible to take this technology.


Theoretically, a single pixel camera that scanned fast enough and held the result in memory could create an image with many pixels, so it seems like this barrier isn't necessarily absolute.


But how well is that going to work in dim light? You're going to hit a limit to how fast you can scan given the time it takes to gather enough photons. When you start to have to be too clever is when you've started to hit limits.


That's about two decades ago, then.


None of that was too clever, obviously.


There is still a limit on lens size that is very hard to work around. I think that by scanning, a camera like this could function as a pinhole lens, but the effective lens size would only be the size of the single pixel.


Either processors could share cameras, or each processor could have a low resolution camera that works in aggregate, perhaps similar to how a fly's eyes work. Or we could have both. Computer vision doesn't have to replicate human vision to be useful, especially if the processing is fast enough.


I've occasionally mulled over what a 1-bit camera could do, in the same vein as a 1-bit audio ADC. Cross this with Charlie Stross's gazillion millimeter-sized computers and emerging 3D image extraction (the term escapes me; take lots of photos and deduce the source objects, a la Google 3D Maps).

One camera may be too small for an image (both resolution & color depth). A lot of them, thrown randomly around, may provide interesting results given enough math.


See http://dsp.rice.edu/cscamera for an almost practical single pixel camera.


Nobody has mentioned that Moore's first law isn't just about the number of transistors doubling. It's about the number of transistors on a chip at which the cost per transistor is lowest.

Also there's Moore's second law about the cost of fabs rising exponentially at about the same rate. If GDP growth doesn't accelerate dramatically, we won't be able to afford fabs any longer in a few decades.


I think the assumption that Moore's law, or even Koomey's law, is going to hold for another two decades is wildly optimistic. I would expect a taper off much sooner, perhaps even this decade.

The physics and the economics of smaller process nodes are becoming harder and harder, to the point where anything beyond 10nm or so looks extremely difficult.


I don't. The slowing of the pace of innovation has been claimed for centuries but has yet to happen.

Switching architectures, materials, moving to atomic scale - there is still plenty of room at the bottom. I'm excited about memristor tech myself.



That graph ignores that there will be pressure to go to smaller sizes not to get more computing power for the same size, but to get the same computing power for less energy.

On the other hand, I would like to see a graph plotting time or nanometers vs cost of the factory needed to make them. I cannot find one, but I remember seeing ones that hinted that the cost of making a production line would become prohibitive for everyone.


Stross' comparison between Cray 1 performance and that of a modern smartphone seems off. He says: "A regular ARM-powered smartphone, such as an iPhone 4S, is some 12-13 orders of magnitude more powerful as a computing device than a late 1970s-vintage Cray 1 supercomputer."

A Cray 1 could peak at about 250 MFLOPS (http://en.wikipedia.org/wiki/Cray-1), and a modern smartphone like the Galaxy Nexus peaks at about 9.6 GFLOPS (using ARM NEON instructions on both cores). That's less than two orders of magnitude difference.
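A quick back-of-the-envelope check with the figures quoted above:

    import math

    cray_1_flops = 250e6          # peak, per the Wikipedia figure above
    galaxy_nexus_flops = 9.6e9    # dual-core NEON peak claimed above
    ratio = galaxy_nexus_flops / cray_1_flops
    print(f"{ratio:.0f}x, about {math.log10(ratio):.1f} orders of magnitude")
    # -> 38x, about 1.6 orders of magnitude (not 12-13)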

Floating-point power efficiency seems to have improved by about 6-7 orders of magnitude in that time though, which is very nice :)


Maybe he's comparing integer operations? ARM has relatively underpowered floating-point because most of the uses for an ARM (at least historically) don't involve it.


He notes it is a mistake in the comments.

But the details of the comparison don't change the part where a modern cell phone is nicely comparable to a multi-million-dollar computer from a few decades ago.


Apparently, Charlie hasn't gotten the memo: http://www.semiwiki.com/forum/content/1388-scariest-graph-i-...

These are financial limits; the scaling limits will be hit at about 1-3 nm. But it will take longer to get there, as doubling will no longer happen in constant time.


Well, that does imply that we could keep making the same processors at smaller sizes for the same price. Power density issues might constrain speeding things up at that point, but it would certainly be a continuation of Koomey's law.

EDIT: More likely, we may see a future slowdown of Moore's law to a doubling every 24 months or more, allowing fabs more time to amortize their investments. This would be bad, but not the end of the world.


fluorescent LEDs



