Would it be good to have denser transistor counts? Sure. There are lots of benefits on the hardware side. Denser transistors can mean reduced power consumption from smaller chips. Manufacturing costs drop because you get more chips from the same materials. Device production costs drop because functions that used to take two chips can be combined into one. Overall devices can be smaller.
But from a software perspective, it's a limited blessing. To be sure, there are benefits for cloud computing and other "embarrassingly parallel" applications. These chips might be great in data centers, which are power-sensitive. And being able to do more things at once can be good, even if it's at the same speed - Google can maybe crawl the web more frequently, and video games might get improved graphics.
But the thing is, we already have nearly unlimited computing power at our disposal. On a whim, I could spin up a hundred thousand computers on Amazon Web Services to do some desired computation (which would cost a mere $700/hr at the current spot prices). My laptop has a 192-core GPU which, by itself, is more powerful than most supercomputers that existed up to the mid-1990s.
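For what it's worth, the $700/hr figure is just multiplication; here's a quick sketch, where the per-instance spot price is an assumed illustrative number, not a live quote:

```python
# Back-of-the-envelope for the AWS figure above. The per-instance spot
# price is an assumed illustrative value, not a current quote.
instances = 100_000
spot_price_per_hr = 0.007  # assumed USD per instance-hour

total_per_hr = round(instances * spot_price_per_hr, 2)
print(total_per_hr)  # 700.0
```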
What we really need are software advances to exploit this hardware. For decades, there has been essentially no progress whatsoever in software that can take advantage of parallel computing. The techniques we do have are either brittle (threads) or forfeit most of the hardware's power (multiple processes combined with relatively slow inter-process communication). Software is in such a backward state that people are amazed when the home-screen icons on an iPhone scroll smoothly from page to page.
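To make the second option concrete, here's a minimal sketch (Python, chosen just for brevity) of the multi-process route: the work is embarrassingly parallel, and every input and result crossing a process boundary pays the IPC cost mentioned above.

```python
# Minimal sketch of the "multiple processes + IPC" approach: each input
# is serialized, sent to a worker process, squared there, and the result
# is serialized back -- that round trip is the IPC overhead.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```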
It would also be useful if mobile phones had that kind of computational power, so that strong AI software with above-human intelligence could run on them. Such software could in principle run on a C64 if enough memory were available, but answers would take quite a long time.
So, I really hope that we will see maaany trillion times more computational power in this century.
Someone else has to do the software hacking. But the fact that nobody is doing it (or at least not meeting your expectations) doesn't mean IBM should stop or change what it's doing.
Any progress we make is a win for us and for the whole humanity.
In HPC, people can always use more computing power to improve their simulations or numerical solvers, and we are nowhere close to hitting a limit in usefulness (think e.g. faster/more detailed simulation of large-scale structures, say airflow around aircraft, n-body systems, simulation of bigger quantum systems in physics ...).
While massive parallelization is of course used wherever possible, it's sometimes difficult to come up with algorithms that scale, and sometimes you simply have too much dependency between your data to parallelize beyond a certain point. This is where this stuff could be used first, before it's ready to "trickle down" to consumer electronics.
tl;dr: we are ok, and we're going to be ok.
The amazing thing about nanotubes is that they have an electron mobility of 100,000 cm²/(V·s) at room temperature, the highest of anything you could think about making a transistor out of. By contrast, silicon has an electron mobility of only 1,400 cm²/(V·s). Since the drive current coming out of a transistor is roughly proportional to electron mobility (nanotubes have an equally high hole mobility), you'd expect a nanotube-based transistor to be able to switch about 70 times as fast as a silicon-based one. Or a quarter of a terahertz, in other words. Now, speed-of-light delays at 90nm will probably be pretty significant at that point, but we should still expect moving to nanotubes to be a huge win for single-threaded performance at the expense of the number of threads available.
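The ~70× figure is just the ratio of the two mobility numbers quoted above; checking the arithmetic:

```python
# Mobility figures as quoted in the comment above.
nanotube_mobility = 100_000  # cm^2/(V*s)
silicon_mobility = 1_400     # cm^2/(V*s)

ratio = nanotube_mobility / silicon_mobility
print(round(ratio, 1))  # 71.4, i.e. the "~70x" claim
```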
A transistor switching at 250GHz would not be 70 times faster than a silicon transistor. In fact, it would only be a little over 2x faster. Our best production quality silicon transistors are well over 100GHz now.
The clock frequency of a CPU is not the speed at which a single transistor in that CPU switches. It's set by the longest path of transistors inside a single pipeline stage. In contemporary CPUs, these paths are ~20 FO4¹ long.
(¹ since the speed of a transistor depends on the load put on it, you need to normalize for that. http://en.wikipedia.org/wiki/FO4 )
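To illustrate the relationship numerically: cycle time is roughly the per-stage logic depth times the delay of one FO4 stage. The FO4 delay here is an assumed ballpark value, not a measured figure for any particular process.

```python
# Sketch of how logic depth per stage sets the clock frequency.
# The per-FO4 delay is an assumed ballpark value.
fo4_delay_ps = 15.0  # assumed delay of one FO4 inverter, in picoseconds
depth_fo4 = 20       # logic depth per pipeline stage, from the comment

cycle_time_ps = depth_fo4 * fo4_delay_ps  # 300 ps per cycle
clock_ghz = 1000.0 / cycle_time_ps        # ps per cycle -> GHz
print(round(clock_ghz, 2))  # 3.33
```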
Unless you were talking about clock skew between the exit points of your h-tree?
An announcement like this is great, but it is no reason to get overly excited just yet. Many other technologies have been proposed and have either disappeared or found employment only in niches (GaAs, for instance); for now, silicon still holds a formidable edge in the most important measure of all: economics.