
Stanford engineers build computer using carbon nanotube technology - jonbaer
http://phys.org/news/2013-09-stanford-carbon-nanotube-technology.html
======
ColinWright
Each report has its own take on the topic - here are a few other HN
submissions:

[https://news.ycombinator.com/item?id=6447714](https://news.ycombinator.com/item?id=6447714)

    
    
      First computer made of nanotubes unveiled
      (bbc.co.uk)
    

[https://news.ycombinator.com/item?id=6447669](https://news.ycombinator.com/item?id=6447669)

    
    
      Processor made from carbon nanotubes runs multitasking OS
      (arstechnica.com)
    

[https://news.ycombinator.com/item?id=6447583](https://news.ycombinator.com/item?id=6447583)

    
    
      First computer made from carbon nanotubes debuts
      (ieee.org)
    

[https://news.ycombinator.com/item?id=6447227](https://news.ycombinator.com/item?id=6447227)

    
    
      Researchers Build a Working Carbon Nanotube Computer
      (nytimes.com)
    

[https://news.ycombinator.com/item?id=6446731](https://news.ycombinator.com/item?id=6446731)

    
    
      World's first carbon nanochip computer, comparable to 4004
      (technologyreview.com)
    

[https://news.ycombinator.com/item?id=6446258](https://news.ycombinator.com/item?id=6446258)

    
    
      Breakthrough in Carbon Nanotube Computing Could 'Save' Moore's Law
      (pcmag.com)
    

[https://news.ycombinator.com/item?id=6446008](https://news.ycombinator.com/item?id=6446008)

    
    
      First Computer Made From Carbon Nanotubes Debuts
      (ieee.org)
    

A few currently have comments, and a few have an upvote or two.

~~~
igravious
Surely those (ieee.org) articles are dupes.

For once we have an announcement that isn't about some advance that might lead
to something 5 years down the line. They built an actual computer. Wow.

------
moocowduckquack
The method for getting rid of the metal ones is stunningly simple. Presumably
you could use the same trick to blow stuff like encryption keys or specific
finite state machines into the actual chip.

------
joe_the_user
Hmm,

Over the last forty years, a Moore's Law of processor speed, of transistors
per chip, of information storage, and (I assume) of other things has held [1].

Moore's Law of processor speed has definitely broken down in the last ten
years, and I assume this research is attempting to address that fundamental
limitation. I believe Moore's Law of transistors per chip still holds, but
without increasing speed this trend is nowhere near as useful (we don't want a
whole lot of slow cores; we _want_ a few fast cores).

Moore's Law of data storage is still here but it's also not as useful without
a similar exponential increase in system data throughput [2].

[1] [http://en.wikipedia.org/wiki/Moore%27s_law](http://en.wikipedia.org/wiki/Moore%27s_law)

[2] [http://en.wikipedia.org/wiki/Throughput](http://en.wikipedia.org/wiki/Throughput)

~~~
breckinloggins
I have a prediction about where this will go:

First, things will continue about the way they are now: cores will stay at the
same speed or get slower but there will be more of them.

The industry will stay this way for at least the next 5 to 10 years, and
during that time we are going to get better and better at figuring out how to
take advantage of the increasing number of cores.

Finally, at some point in the future (I predict between 10 and 15 years), we
are going to see an _explosion_ in materials science that allows us to
suddenly jump from 3 - 5GHz / core to 400 GHz - 5 THz per core (with even more
cores than we have now and dramatically lower power consumption).

At this point we will be nicely prepared to exploit _both_ the parallelism and
the return of raw speed gains. We'll also see completely different
architectures like Chuck Moore's GreenArrays, quantum co-processors,
"memory/compute fabrics" like memristor crossbar arrays, and further
advancements in the GPU/APU area. These changes will continue to challenge our
assumptions about what are the "proper" paradigms, languages, and
architectures for computer programs.

We've made great progress already. I can think of myself as an example: in
2005 I, like most of the programmers here, read Herb Sutter's essay _The Free
Lunch Is Over_ [1] and realized that he was correct: the era of easy speedups
was giving way to the era of concurrency. And programmers were going to have
to figure out how to take advantage of that concurrency or an app in 2013 was
going to run at about the same speed as an app in 2005.

But I had _no idea_ how we were going to do that. I'd used threads and mutexes
and locks in my professional work so I knew that correct concurrency was hard
(and correctness almost never survived past a few rounds of maintenance). But
I didn't have the first clue how we were going to get out of this mess without
all becoming experts in Category Theory.

Fast forward to today and I find myself using things like "lambda
expressions", "functional closures", "higher order functions", "map/reduce",
"filters", and even "monads" and, you know what, they now seem as natural to
me as the for-loop.

These days, I'm much more likely to write:

    
    
        stuff.select {|l| l > 23}.map(&:increment_magically).sum
    

than I am to write the equivalent imperative loop. I'm no genius, so I
consider it remarkable that I've been able to digest these concepts as fast as
I did.
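(`increment_magically` is made up, of course; here is the same pipeline as
runnable Ruby next to its imperative twin, assuming the "magic" is just adding
one:)

```ruby
stuff = [10, 25, 40]

# Functional pipeline: filter, then transform, then fold.
functional = stuff.select { |l| l > 23 }.map { |l| l + 1 }.sum

# Equivalent imperative loop with explicit mutable state.
imperative = 0
stuff.each do |l|
  imperative += l + 1 if l > 23
end

functional == imperative  # => true (both are 67)
```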

And I am willing to bet that many other programmers here feel the same way.
Not only that, but we are beginning to _really understand_ things like:

- Identity vs value

- The benefits and dangers of mutable state

- Impure vs pure functions

- Software Transactional Memory

- MVCC / persistent data structures

- Even Category Theory :)
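To make the mutable-state and purity points concrete, a minimal Ruby sketch
(the names are invented for illustration):

```ruby
# Impure: depends on and mutates state outside itself, so its result
# varies with call history -- hostile to concurrency.
$counter = 0
def bump!(n)
  $counter += n
end

# Pure: the result depends only on the arguments, so calls can be
# reordered, memoized, or run in parallel safely.
def add(total, n)
  total + n
end

bump!(5)   # $counter is now 5; call it again and you get 10
add(1, 2)  # => 3, every time
```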

None of this stuff was even remotely on my radar 8 years ago, and now it's
part of our professional lives. We may not get to _use_ it all yet, but we can
use more and more of it on every project.

And we are seeing the benefits.

Now fast forward to a time when we not only have hundreds or thousands of
cores, but they suddenly jump up to 500 GHz. I guarantee you none of these
skills will have been for nought.

[1] [http://www.gotw.ca/publications/concurrency-ddj.htm](http://www.gotw.ca/publications/concurrency-ddj.htm)

~~~
coldtea
> _Finally, at some point in the future (I predict between 10 and 15 years),
> we are going to see an explosion in materials science that allows us to
> suddenly jump from 3 - 5GHz / core to 400 GHz - 5 THz per core (with even
> more cores than we have now and dramatically lower power consumption)._

I don't see this happening. There are some physical limitations to getting to
that.

------
Nanomedicine
They just wired a certain number of CNT transistors together; there is
nothing Nature-worthy here... I expect a paper on a graphene-transistor
"computer" to show up in Nature soon...

------
username42
"Computer with 178 transistors." What is the minimum number of transistors
for something to be called a computer?

