Hacker News
World’s first programmable nanoprocessor (seas.harvard.edu)
57 points by potomak on Feb 11, 2011 | 7 comments



>"This work represents a quantum jump forward in the complexity and function of circuits built from the bottom up, and thus demonstrates that this bottom-up paradigm, which is distinct from the way commercial circuits are built today, can yield nanoprocessors and other integrated systems of the future,”

Define "bottom up", please. I highly doubt that chip manufacturers haven't been trying extremely hard to make smaller chips, so I'm forced to assume they mean something else. In the meantime, of course chip manufacturers are focusing on increasing macro-complexity, because that's where the major speed increases have been for quite a while (SSE, predictive branching, hyperthreading, multi-processor, etc)


These are my own musings, so bear that in mind. But. One of the goals of nanotechnology is to build things on an "atom by atom" basis. One of the objections to this is that it is energetically infeasible: building things "atom by atom" carries an unavoidable energy cost from the basic chemistry involved, and the waste heat would be enough to fry whatever it is you're trying to build.

The obvious solution to that is to do what we programmers already do: divide and conquer. Instead of building atom-by-atom, you grab two very small bars of the target substance, stick them together (heat penalty), hand them up to the next level, which sticks two such bars together, hand them up to the next level, and so on. Just as one example.

This sounds like another example of the same idea. If you can build tiles of circuitry and put them together, then you can stick two 2x1 tiles together, then stick two 2x2 tiles together, and bootstrap your way up to a macroscopic result in a reasonable time, as opposed to the current monolithic (pun intended) top-down approach to circuit design. Ultimately it looks more like middle-out design.
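
To make the scaling argument concrete, here's a toy sketch (my own back-of-the-envelope numbers, nothing from the article): if each assembly level joins pairs of equal-sized sub-assemblies, you reach N units in roughly log2(N) levels rather than N serial atom-by-atom steps.

    # Toy illustration, not from the article: pairwise assembly reaches
    # N units in ceil(log2(N)) levels instead of N serial steps.
    import math

    def assembly_levels(n_units):
        """Levels needed if each level joins pairs of equal-sized sub-assemblies."""
        return math.ceil(math.log2(n_units))

    for n in (8, 10**6, 10**12):
        print(n, "units ->", assembly_levels(n), "pairwise levels")

So even a trillion-unit assembly only needs on the order of 40 levels of "stick two together", which is why the bottom-up doubling trick matters.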

So, I'm interpreting this as an announcement that they've created a microscopic component of interest that could be used to assemble something yet more powerful in a reasonable number of steps. Big news, really, though still just one thing in a stream.



>Bottom-up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally-controlled ones to direct their assembly.

Absolutely perfect, thanks :) I never knew there were precise uses of these terms; they've always seemed overly general to me.


So it's not clear what the advance here actually is. How is this new technique different from the old ones? Could we at least get a size comparison against current processors?


Yes, but can they _replicate_? If so, we're doomed! http://en.wikipedia.org/wiki/Gray_goo





