This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has still yet to make it from the lab to the FDA.
The timing of this press release is entirely to boost investor confidence in IBM and GlobalFoundries given Intel's recent announcement of delays at the 10nm process node.
The Ars article is vastly better than the above link: http://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industr...
Chip manufacturers like Intel and IBM have regularly made good on promises of exponential progress for at least a half century. Comparing them to press release-pushing biomedical researchers is tantamount to a slur.
> Comparing them [chip manufacturers] to press release-pushing biomedical researchers is tantamount to a slur.
No, it isn't. Slower progress in biomedical research isn't a result of biomedical researchers exhibiting any of the qualities whose unwarranted attribution normally constitutes slur. It is the result of much greater complexity, lower predictability, higher safety requirements and weaker human understanding of biological systems compared to semiconductors.
If third parties ("PR") hijack the truth, it is up to the researcher to publicly denounce them.
If, as I suspect, such denunciation is bad for a researcher's funding, then we have a problem in research, if indeed, in such circumstances, it can even be called research rather than, say, "marketing".
Clearly biotech is a younger field than semiconductors, and it should be given wide latitude to make mistakes without prejudice, but that does not exonerate it from explicitly communicating the expected uncertainty of its results.
Most of the blame lies with the scientific press, but the researchers don't seem to mind it all that much either. Misleading or overly optimistic press releases written by university personnel are also the source of much of it.
Whatever it takes to get those sweet, sweet grant dollars
Let's not forget that the chip in the Z series mainframes is the fastest commercial piece of silicon ever produced, and the high end Power8 chips handily outrun top-of-the-range 18-core Xeons on a number of benchmarks (though at worse power envelopes). (http://www.anandtech.com/show/9193/the-xeon-e78800-v3-review...).
Turning technology into something that could be manufactured and sold was definitely on the minds of IBM's research division. IBM was spending 6 billion a year on research and they were looking for more results out of it.
They knew that a discovery/invention was good, but one that could be brought to market was better. They licensed a lot of their tech to the chip machine manufacturers, if I remember correctly. Plus, back in those days IBM had chip making facilities.
And Moore's law probably won't return to life until we learn how to solve that problem, which the current work doesn't help with.
The CEO of Applied Materials, Gary Dickerson, has stated that the 450mm wafer timeline “has definitely been pushed out from a timing standpoint.” That’s incredibly important, because the economics of 450mm wafers were tied directly to the economics of another struggling technology — EUV. EUV is the follow-up to 193nm lithography that’s used for etching wafers, but it’s a technology that’s spent over a decade mired in technological problems and major ramp-up concerns.
Toasting to the death of Moore's Law:
A few teething problems would be expected.
7nm has been struggling for a while, 5nm is likely to be late, and I don't think anyone really knows what happens after that.
Longer term, industrial manufacture is probably going to have to move to something exotic like nano-assembly of individual atoms, with some extra finagling to work around tunnelling effects. (Easier said than done...)
The average person uses PCs and mobile devices to browse the web, write documents, order an Uber, and maybe play games. Nothing much is being done on the software front that challenges current systems. Maybe if VR took off or we got home applications for AI like domestic robotics that would change. I could see a domestic robot capable of folding clothes, cleaning up, etc. needing a low power chip that can do what a dual-12-core Xeon can do on smart phone power and thermal profiles. <5nm might be needed to accomplish that.
I'm not sure server and high-end compute demand is sufficient to pay for the R&D that would be required to go far beyond 7nm.
But the good news is that we haven't even scratched the surface of what current systems could theoretically accomplish. Look into the demo scene and prepare to be blown away by what 8-bit CPUs in the 1980s could accomplish with non-crap code running on them. Maybe we need a software Moore's Law to take over for the hardware one -- right now software has more of an erooM's Law.
One thing is clear: if you do software, ball's in your court either way. Either you need to invent killer apps to keep demand high for high performance computers -- things that really need that much power -- or you need to take over for the hardware people and start finding new efficiencies.
Ball's really been in software's court for a while anyway with multi-core... linear performance max'd out (for consumer chips) a while ago.
1. AI - variety of applications, both for consumer and business markets.
2. Telepresence. If we can get the real feeling of "being there" to telepresence, at a price point that's attractive for the consumer.
3. Simulation. Currently it's a complex process, mostly done by experts. If it can become a tool for regular engineers, and maybe further down the road be combined with some sort of genetic algorithms, maybe there's potential for a huge demand increase.
I wonder if they could use electrons instead of light to etch the surface.
It has a great graph of engineering effort vs Moore's law, which made it cheaper to just wait for a faster chip than put in the effort.
Then imagine how much performance you could get out of OS-level VMs that understand the processes at the VM level (i.e. can access code in some IR that they can analyze easily, recompile on the fly, etc.). There is already stuff like this in specialized markets (e.g. kernel-level GC for the JVM), but it's still fairly specific.
Then there's all the shitty legacy abstraction layers in things like filesystems - ZFS is a perfect example of what kind of gains you can get for free if you just rethink the design decisions behind current stack and see what applies and what doesn't.
If the benefit of rewriting these systems ever overcomes the cost, we have huge potential areas for performance gains. Modern systems are very far from being performance-efficient; they are efficient based on various other factors (development cost, compatibility, etc.).
I also wish ZFS would grow an encryption layer (one that isn't based on Sunacle's implementation, since Sunacle doesn't want to share that one thus no one can use it).
Compare that to some of the code people ran through 6502-derivatives.
Abstraction may be more efficient in terms of programmer time, and performance efficiency may be high enough so as to be immaterial, but the two shouldn't be conflated.
The older I get, the more I start seeing over-complexity in stacks as a security risk as well. I feel like there's a fundamental maximum to the number of levels of abstraction one can keep in one's head "enough" to avoid creating layer interaction bugs. Stack overflow, indeed. :)
There are other materials to make chips out of besides silicon, gallium arsenide and carbon for instance, each of which has different scaling properties.
There's also ways to make chips more dense by stacking wafers instead of trying to shrink features.
Stacking is certainly a thing and it's good for memory (see AMD's newest graphics card) but power dissipation provides limits in terms of how much high speed logic you can put under a given area.
Well, incremental lab improvements of this and that technique do make it into practice all the time. The failure of various biological research efforts is a symptom of some fundamental brokenness or inherent hardness in biological research (biological systems are inherently messy; the ability of biologists to work with uniform, mass-produced mice is actually a hindrance when they try to apply that research to non-uniform humans, etc.). None of these apply to chip manufacture. The increase in quantum effects as one goes down in size may be a barrier to 7nm, but it seems like it would be a barrier to working one-off chips as well as to final production.
Which is to say, the skepticism doesn't seem to have a basis. A working chip is an important and necessary step to getting to mass production - clearly mass production would be their aim.
Your supposedly better link agrees: "While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it's a working sub-10nm chip (this is pretty significant in itself); it's the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography."
And specially stabilized buildings? "NOBODY MOVE! WE'RE ETCHING!"
That was several generations ago, I'm looking forward to seeing what is required to manufacture with high yield at 7nm!
At specific wavelengths (~245-265 nm), (UV) light inactivates quite a few living things. As light is quantized, the purplish color you see is due to electrons stepping down between energy levels.
The concept is not at all dissimilar to old-fashioned darkrooms in photography. Outside light will wash out your image.
(Ex.: https://en.wikipedia.org/wiki/45_nanometer#Example:_Intel.27... )
As far as fabrication, one problem is that obviously you aren't using visible light to etch features on your wafers. The x-rays must be fun to work with... Not to mention, your photoresist would have to resist x-rays. Getting x-ray-resisting photoresist on and off your wafer must be tricky. Since 7 nm is about the size of several atoms, your wafer probably needs to be almost perfectly pure, which can't be easy, either.
(tldr: 193nm light works down to the 28nm node; progress requires moving further into UV and/or immersion in a liquid with a different refractive index)
I had always gotten the impression that even if they couldn't get as small, the impact of less heat and the ability to cross beams could allow them to be denser. But like I said, I don't really know what I'm talking about :)
Further out is still some stuff like graphene or carbon nanotubes. Of course, the structure of the transistor itself might change, which could enable further downscaling. There's already been a switch from planar to Silicon-on-Insulator (e.g. GlobalFoundries, TSMC, Samsung(?)) and Intel has the TriGate (everybody else calls it FinFet).
One day we might see the natural evolution to the gate-all-around FET, which would be something akin to a silicon nanowire (note: planar has the gate on top; FinFET has the gate on top, left, and right of the channel). However, there are huge roadblocks in manufacturing to solve. And this could really be an issue. We might very well be able to build at the 5nm node, but if we can't build chips fast and cheap enough, no one's going to do it. Manufacturers are already triple-patterning and doing all kinds of voodoo just to keep up with Moore's law.
Good old silicon might actually stay a central part for a much, much longer time.
Remember what Intel did with the FinFETs. It was the same discussion then (What could the next thing be? III-V, SOI, blabla?). At one point, Intel simply came on stage, surprised us with FinFETs, and everybody was like: "I guess it's FinFET, then." The idea of FinFETs is actually from the 90s or so. The sole reason why no one did it before is because no one could actually build them at scale (AFAIK, Intel is still the only company that ships FinFETs).
Keep in mind that manufacturing is very hard. It is very unlikely that there will be more than incremental steps. Just changing the channel material is already quite the task.
Also, don't believe any "This is the next transistor!" stuff. You can find these things a lot but they rarely mean more than some department trying to make a bit of publicity.
The upside of SOI is that it's easier to manufacture. That's why we see so much of it around (AFAIK, GlobalFoundries and TSMC still do it).
But the actual way forward is the FinFET. The 7nm chip from the article is actually built with FinFETs. Otherwise that thing would probably not work very well.
Seems like the wonder material.
C has a 5.5eV band gap, while Si has 1.1eV and Ge has 0.67eV.
What's bandgap? The energy difference between the valence and conduction bands in a semiconductor. https://en.wikipedia.org/wiki/File:Bandgap_in_semiconductor....
The only real reason to switch to carbon chips is noise reduction at the 4nm node (if we ever go that far, we're getting into Long X-Rays at that point for lithography).
Also 4nm node will only be ~16-18 carbon atoms wide.
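As a rough back-of-the-envelope illustration (my own, not from the thread): the intrinsic carrier concentration of a semiconductor scales roughly as exp(-Eg / 2kT), so a wider band gap means exponentially fewer thermally excited carriers, which is the "noise reduction" argument for diamond. Using the band gap values quoted above:

```python
import math

K_T_300K = 0.02585  # thermal energy kT at room temperature (300 K), in eV

def thermal_excitation_factor(band_gap_ev: float) -> float:
    """Boltzmann factor exp(-Eg / 2kT), proportional to the intrinsic
    carrier concentration of a semiconductor at room temperature."""
    return math.exp(-band_gap_ev / (2 * K_T_300K))

# Band gaps quoted above, in eV
gaps = {"Ge": 0.67, "Si": 1.1, "C (diamond)": 5.5}

for material, eg in gaps.items():
    print(f"{material:12s} Eg = {eg:4.2f} eV  factor = {thermal_excitation_factor(eg):.2e}")
```

Diamond's factor comes out roughly 37 orders of magnitude below silicon's, so thermally generated leakage is essentially absent, at the price of a material that is far harder to dope and process.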
Diamond has the interesting property of having its conduction band above vacuum level, so that free electrons would (in principle) fall out of the material. (Surface physics gets in the way of that.)
It's the reason that diamond is used for cold-cathode emitters btw.
Also, it's interesting to see how things like e-beam lithography are pushed once again at least a node into the future. We (as in they) are still able to tune and optimize on the same infrastructure.
I wonder: what organization, really, is mostly responsible for the newer fabs? I mean, does each of Samsung, Intel, IBM, etc. do everything on their own? Or is there a main company, maybe Applied Materials, with some help from, say, some small company for UV sources, some optics from, maybe, Nikon, some mechanical pieces, etc., that does the real work for all the fabs?
7 nm -- what speed and power increases will that bring over 14 nm, 22 nm, or whatever is being manufactured now, etc.?

Long live Moore's law! It ain't over until the fat lady sings, and I don't hear any fat lady yet!
I was guessing that maybe, for the fabs themselves, there was some one company that built the fabs. Or, why reinvent the wheel several times? Sure, for the chip design, say, by Qualcomm, Samsung, Intel, IBM, that's a lot of design software, know-how, etc. And, sure, QC has to be one heck of an Excedrin headache, but with likely some long-standing basic ideas.
This is something that interests me. It must be terribly difficult to get up to speed on something like this, even with vigorous state funding. Are there any layman-readable sources on the topic?
IBM's 7nm is a great accomplishment for sure, but we really don't know anything about how it was made from the articles. Essentially SiGe is a bit more conductive and can switch faster than plain Si, thanks to the strain it puts on the channel, which improves carrier mobility.
Unfortunately this won't sell new consumer hardware on a regular basis.
Everything as a service. Even hardware could become a service. You wouldn't have to actually own it; instead you pay a monthly fee and you have access to product X. You get the latest models without any extra fees.
The advantage of having a service is that the customer is hooked and it's harder to leave.
As an idea, I feel it's wonderful. It would massively reduce wastage.
For example, I can put £40 in my sock drawer every month and, in eight months, I could buy a PS4 and use it for the rest of my life. I could then start saving that £40 toward a different purchase. Alternately, I could go with the site you listed and pay £40 a month for the rest of my life just for that same PS4, never making headway.
Now, you can point out that a PS4 is almost the definition of an unnecessary luxury and that I don't have to pay the £40 monthly rental fee, and you'd be right. I mostly went with that example because it was right on the front page and such a terrible deal. Still, in the world where I can own things, I can have the PS4 and the £40 a month, after a little over a half year of hardship, while the rental society won't let me have that. Similarly, I can buy a DVD and watch it forever, instead of shelling out for Netflix each month and hoping that they don't drop that title.
All of the items on that front page have a monthly rental cost that's about 1/10 the purchase cost. In most cases you'd be better off with a 12 month personal loan than renting it for 12 months and you get to keep the item. If you want it for 18 months then even buying it on a credit card at 20% looks like a reasonable option.
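The arithmetic behind the comparison above can be sketched quickly. The £320 purchase price is my assumption, inferred from the "eight months at £40" example; the flat 20% interest figure is a deliberate simplification for illustration, not a real APR calculation:

```python
PURCHASE_PRICE = 320.0  # assumed one-off price (eight months of £40, per the comment)
MONTHLY_RENT = 40.0     # rental fee from the comment

# Month in which cumulative rent first matches the purchase price
break_even_months = PURCHASE_PRICE / MONTHLY_RENT
print(f"Renting costs as much as buying after {break_even_months:.0f} months")

# Even a 12-month loan with a flat 20% interest charge beats 12 months of rent,
# and you keep the item at the end.
loan_total = PURCHASE_PRICE * 1.20
rent_total = MONTHLY_RENT * 12
print(f"12-month loan at flat 20%: ~£{loan_total:.0f} vs 12 months rent: £{rent_total:.0f}")
```

Past the break-even point every further month of rent is pure loss relative to owning, which is the commenter's core objection.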
I agree and I have piles of DVDs to show for it. Here's my problem. One DVD is a fine example, but what about 10, 100, 1000? At some point, it's worth it to use Netflix for storage.
I also have stacks of VHS tapes, and audio cassettes and LPs. The life of my playback hardware is finite. I can refresh the hardware or somehow try to copy my old media to new media. I'll have to think what I might have done different if something like Netflix had existed when I started buying audio and video to own and accumulate.
If you're amazingly lucky. In actuality, you've got to replace it (or have it repaired, which is usually almost as expensive) every few years because it wears out.
And you've got to continue paying for electricity to run it all that time. And the TV (and replacements/repairs for it) to play it on. And the electricity to run the TV. And a payment for the dwelling in which it's housed. (Which you can buy outright… but even if you do, the payments for upkeep and the taxes to keep roads/etc coming to it end up being of a similar order…) Etc, etc.
In practice, unless we lived under a radically different system (far more different than [state] capitalism with/without a guaranteed minimum income, or communism), we pretty much have to have a system in which everyone can afford to continue making regular payments on many if not most of the expensive things they use.
Also, hasn't IBM just sold its division to GlobalFoundries? So are they double dipping as usual by licensing them new tech separately?
It will not happen, I know, but "Wouldn't It Be Nice" was always one of my favorite Beach Boys songs (Pet Sounds!) https://www.youtube.com/watch?v=ofByti7A4uM
This is THE chance.
In a former job, I was working on an application that did a lot of computational geometry, and I remember reading one day that then-current POWER chips could do floating point multiplication and division in a single cycle (plus latency for pipelining, of course)...
The Intel instruction set is an abomination. You couldn't even think of something worse, yet somehow they managed.
Intel needs competition. Everybody else died away.
PPC64 is on the open, everybody can make one. This is better than the IBM PC revolution, where you could license it to produce competing parts. Also better than the various ARM chips.
We need to break through the hardware barrier of the last few years. CPU speed has stalled; only with new tech, as in the new Z series or Power8 chips, can Moore's law be revived.
And there are many more reasons to be excited.
So, seeing another process shrink doesn't excite me given we haven't tapped the potential of what we already have. Lots of technologies help: EDA, FPGAs, S-ASICs, multi-project wafers, ASIC-proven I.P., and so on. Yet even 350nm still isn't very accessible to most companies wanting to make a chip, because the tools, I.P., and expertise are too expensive (or sometimes scarce). Yet the benefits are real in so many use cases (esp. security). I'd like to see more companies dramatically bringing the costs down and eliminating other barriers to entry with affordable prices.
Example of the problems and what kind of work we're looking at:
Example direction to go in:
I think the best model, though, is to do what the EDA vendors did: invest money into smart people, including in academia, to solve the NP-hard problems of each tool phase with incremental improvements over time. I'm thinking a non-profit with continuous funding by the likes of Google, Facebook, Microsoft, Wall St firms, etc. A membership fee plus licensing tools at cost, which continues to go down, might do it. Start with simpler problems such as place-and-route and ASIC gate-level simulation to deliver fast, easy-to-use, low cost tools. Savings against EDA tools bring in more members and customers whose money can be invested in maintaining those tools plus funding hardest ones (esp high-level synthesis). Also, money goes into good logic libraries for affordable process nodes. Non-commercial use is free but I.P. must be shared with members.
Set up right, this thing could fund the hard stuff with commercial activity and benefit from academic/FOSS style submissions. With the right structure, it also won't go away due to an acquisition or someone running out of money. Open source projects don't die: they just become unmaintained, temporarily or permanently. Someone can pick up the ball later.
I do hope that AMD comes up with something with a single-core performance competitive to an i7 with a power envelope in the ballpark.
The best AMD chip can't even compete with the cheapest Pentium (Celeron) when it comes to IPC.