IBM says it has made working versions of 7nm chips (nytimes.com)
239 points by mattee 683 days ago | 120 comments



Let's all realize that these research branches have been playing around with 10nm and 7nm chips for years now - the fact that IBM cobbled together a working chip isn't surprising. Getting it to production is the vastly more important part.

This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has yet to make it from the lab to the FDA.

The timing of this press release is entirely to boost investor confidence in IBM and GlobalFoundries given Intel's recent announcement of delays at the 10nm process node.

edit:

The Ars article is vastly better than the above link: http://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industr...


> This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has yet to make it from the lab to the FDA.

Chip manufacturers like Intel and IBM have regularly made good on promises of exponential progress for at least a half century. Comparing them to press release-pushing biomedical researchers is tantamount to a slur.


Nitpick:

> Comparing them [chip manufacturers] to press release-pushing biomedical researchers is tantamount to a slur.

No, it isn't. Slower progress in biomedical research isn't a result of biomedical researchers exhibiting any of the qualities whose unwarranted attribution normally constitutes a slur. It is the result of much greater complexity, lower predictability, higher safety requirements and weaker human understanding of biological systems compared to semiconductors.


The point is that most semiconductor predictions come true, whereas biomed predictions are much less reliable. Unfortunately, that is a reflection on the latter's practitioners as they are aware of their poor odds yet still publish.


I think this is unfair. Scientists often have a narrow-scope breakthrough in an extremely technical area, and when they're asked to dumb it down for a wider audience, the tech press / university PR team runs wild. Something like curing diabetes is going to take hundreds or thousands of small incremental improvements and breakthroughs, so when they say "Could lead to a cure!!" they're usually correct, but the nuance is often missed.


There is no excuse for publishing anything that does not stand up to replication and carry a high enough chance that the published prediction will be realised. Hence the OP is correct in pointing out the unfairness of equating comparatively reliable semiconductor process improvement predictions with the relative dartboard that is biotech.

If third parties ("PR") hijack the truth, it is up to the researcher to denounce them publicly.

If, as I suspect, such denunciation is bad for a researcher's funding, then we have a problem in research, if indeed, in such circumstances, it can even be called research (as opposed to, say, "marketing").

Clearly biotech is a younger field than semiconductors, and it should be given a wide berth to make mistakes without prejudice, but that does not exonerate it from explicitly communicating the expected uncertainty of its results.


The issue is less the progress of biomedical researchers and more the discrepancy between the headlines and the actual results.

Most of the blame lies with the scientific press, but the researchers don't seem to mind it all that much either. Misleading or overly optimistic press releases written by university personnel are also the source of much of it.


> Misleading or overly optimistic press releases written by university personnel are also the source of much of it.

Whatever it takes to get those sweet, sweet grant dollars


Color me astonished that a brand new account named POWERfan is vigorously defending IBM in an online discussion forum.


The Ars article you refer to appears to be fairly bullish on IBM's process technology, and in particular the extreme ultraviolet lithography, which has been problematic elsewhere. IBM has a deep history of fundamental research turning into real products: just look at magneto-resistive drive heads as one example. I am much less sceptical than you are: this company has proven over many decades that it can drive fundamental technology forward, and I doubt it is doing this only to bamboozle the competition or investors.

Let's not forget that the chip in the Z series mainframes is the fastest commercial piece of silicon ever produced, and the high end Power8 chips handily outrun top-of-the-range 18-core Xeons on a number of benchmarks (though at worse power envelopes). (http://www.anandtech.com/show/9193/the-xeon-e78800-v3-review...).


I worked at IBM Research for a bit (T.J. Watson Research Center, Yorktown), when Gerstner was the CEO. They had PhD chemists, physicists and mathematicians working on all sorts of chip-related things. They had a mini fab in the building. I remember them testing building vibration levels.

Turning technology into something that could be manufactured and sold was definitely on Research's mind. IBM was spending 6 billion a year on research and was looking for more results out of it.

They knew that a discovery/invention was good, but one that could be brought to market was better. They licensed a lot of their tech to the chip-machine manufacturers, if I remember correctly. Plus, back in those days IBM had chip-making facilities.


Another issue: currently the biggest bottleneck, cost-wise, is in lithography - the process we use to draw the transistors onto chips. Because of this, the two latest generations of chips are more expensive (per transistor) than older ones - stalling Moore's law.

And Moore's law probably won't return to life until we learn how to solve that problem, which the current work doesn't help with.


This is where 450mm wafers and EUV (extreme ultraviolet lithography) were supposed to come in. EUV relieves the need for double patterning and the tremendous additional costs that entails (and was used to manufacture this 7nm chip).

The CEO of Applied Materials, Gary Dickerson, has stated that the 450mm wafer timeline “has definitely been pushed out from a timing standpoint.” That’s incredibly important, because the economics of 450mm wafers were tied directly to the economics of another struggling technology — EUV. EUV is the follow-up to 193nm lithography that’s used for etching wafers, but it’s a technology that’s spent over a decade mired in technological problems and major ramp-up concerns.

Toasting to the death of Moore's Law: https://www.youtube.com/watch?v=IBrEx-FINEI


And for comparison, scale fans, let's remember that we're talking about making 7nm features on wafers that are nearly a foot and a half wide, using near-as-dammit x-ray wavelengths.

A few teething problems would be expected.

7nm has been struggling for a while, 5nm is likely to be late, and I don't think anyone really knows what happens after that.

Longer term, industrial manufacture is probably going to have to move to something exotic like nano-assembly of individual atoms, with some extra finagling to work around tunnelling effects. (Easier said than done...)


... and why would we invest the money to do that when there is not enough (software-driven) demand for that performance?

The average person uses PCs and mobile devices to browse the web, write documents, order an Uber, and maybe play games. Nothing much is being done on the software front that challenges current systems. Maybe if VR took off or we got home applications for AI like domestic robotics that would change. I could see a domestic robot capable of folding clothes, cleaning up, etc. needing a low power chip that can do what a dual-12-core Xeon can do on smart phone power and thermal profiles. <5nm might be needed to accomplish that.

I'm not sure server and high-end compute demand is sufficient to pay for the R&D that would be required to go far beyond 7nm.

But the good news is that we haven't even scratched the surface of what current systems could theoretically accomplish. Look into the demo scene and prepare to be blown away by what 8-bit CPUs in the 1980s could accomplish with non-crap code running on them. Maybe we need a software Moore's Law to take over for the hardware one -- right now software has more of an erooM's Law.

One thing is clear: if you do software, ball's in your court either way. Either you need to invent killer apps to keep demand high for high performance computers -- things that really need that much power -- or you need to take over for the hardware people and start finding new efficiencies.

Ball's really been in software's court for a while anyway with multi-core... linear performance max'd out (for consumer chips) a while ago.


I agree that most of the demand requires software innovation, but there are other good sources of demand, for example:

1. AI - variety of applications, both for consumer and business markets.

2. Telepresence. If we can bring the real feeling of "being there" to telepresence at a price point that's attractive for the consumer, that's another big source of demand.

3. Simulation. Currently it's a complex process, mostly done by experts. If it can become a tool for regular engineers, and maybe further down the road be combined with some sort of genetic algorithms, there's potential for a huge increase in demand.


GPUs are basically as big as possible, and still can't really render at 4K. Now if I want to have a ray-traced Game Engine, fuggetaboutit.


> and EUV (extreme ultraviolet lithography) were supposed to come in

I wonder if they could use electrons instead of light to etch the surface.


Electron-beam lithography is a technique that works, but because it's very slow, it's also very expensive. There are efforts to parallelize e-beam writing, but they face hard challenges. For example, if you want to pattern faster, you just need to shoot more electrons. But if you shoot too many electrons, they repel each other and the pattern blurs. It's a difficult problem to overcome, but people are working on it.


I think we've been wasting far too much processing power in inefficient software for the past few decades, and it's only Moore's Law that let it happen for so long. Now that it's coming to an end, maybe we'll see more emphasis on efficiently optimised software and mindful resource usage.


This great article supports that. http://spectrum.ieee.org/semiconductors/design/the-death-of-...

It has a great graph of engineering effort vs. Moore's law, which made it cheaper to just wait for a faster chip than to put in the effort.


I think you're incorrect, remembering performance that never existed based on shortcomings you glossed over at the time, because you have that all-too-human bias of believing that things were better when you were younger.


There is so much low-hanging fruit in software design that is there simply because of legacy design trade-offs and the cost associated with replacing them - we could see huge performance gains overnight if, for example, we eliminated reliance on hardware memory protection and context switching between kernel space and user space by using languages that can prove memory safety in software.

Then imagine how much performance you could get out of OS-level VMs that understand processes at the VM level (i.e. can access code in some IR that they can analyze easily, recompile on the fly, etc.). There is already stuff like this in specialized markets (e.g. kernel-level GC for the JVM), but it's still fairly niche.

Then there are all the shitty legacy abstraction layers in things like filesystems - ZFS is a perfect example of the kind of gains you can get for free if you just rethink the design decisions behind the current stack and see what still applies and what doesn't.

If the benefit of rewriting these systems ever outweighs the cost, we have huge potential areas for performance gains. Modern systems are very far from being performance-efficient; they are efficient with respect to various other factors (development cost, compatibility, etc.).


I wish Linux would just merge a ZFS implementation into the kernel already.

I also wish ZFS would grow an encryption layer (one that isn't based on Sunacle's implementation, since Sunacle doesn't want to share that one thus no one can use it).


I understand what you're saying, but do you really think most of a modern Android (to take the theme to its Javaesque extreme) stack is the most efficient way to accomplish processing?

Compare that to some of the code people ran through 6502-derivatives.

Abstraction may be more efficient in terms of programmer time, and performance efficiency may be high enough so as to be immaterial, but the two shouldn't be conflated.


> do you really think most of a modern Android (to take the theme to its Javaesque extreme) stack is the most efficient way to accomplish processing?

Reminds me of a version of this image[1] which has a discussion superimposed over it that says, "but if he had a big enough pile of ladders he could get over the wall!" and someone responds, "welcome to Android optimization." I think we see something similar with Javascript performance.

[1] http://i.imgur.com/AWG7LqR.jpg


Ha! That's the first non- https://interfacelift.com/ image I've put as my background for a while.

The older I get, the more I start seeing over-complexity in stacks as a security risk as well. I feel like there's a fundamental maximum to the number of levels of abstraction one can keep in one's head "enough" to avoid creating layer interaction bugs. Stack overflow, indeed. :)


We already see this in the comeback of C++ instead of trying to build everything in managed languages, like writing an OS in C#.


"Coming to an end"? The sky isn't falling yet. Just because they're having some trouble with one process doesn't mean the whole party is over.

There are other materials to make chips out of besides silicon, gallium arsenide and carbon for instance, each of which has different scaling properties.

There are also ways to make chips more dense by stacking wafers instead of trying to shrink features.


It can sometimes be comforting to know that the universe imposes fundamental limits on how efficiently computation can be done. Comforting because IIRC we've still got at least 15 orders of magnitude of improvement available. But while I'm sure you're right that we're going to be able to switch to different materials (or maybe away from transistors entirely) when progress in silicon runs out, we might have to expect an interregnum while other computational substrates are developed to the point where they can provide higher performance.

Stacking is certainly a thing and it's good for memory (see AMD's newest graphics card) but power dissipation provides limits in terms of how much high speed logic you can put under a given area.

http://www.anandtech.com/show/9266/amd-hbm-deep-dive


> This press release is equivalent to "Scientists Cure Diabetes in Mice" - a breakthrough that happens about a half dozen times a year but has yet to make it from the lab to the FDA.

Well, incremental lab improvements of this or that technique do make it into practice all the time. The failure of various lines of biological research is a symptom of some fundamental brokenness or inherent hardness in biological research (biological systems are inherently messy - the ability of biologists to work with uniform, mass-produced mice is actually a hindrance when they try to apply that research to non-uniform humans, etc.). None of this applies to chip manufacture. The increase in quantum effects as one goes down in size may be a barrier at 7nm, but it seems like it would be a barrier to working one-off chips as well as to final production.

Which is to say, the skepticism doesn't seem to have a basis. A working chip is an important and necessary step to getting to mass production - clearly mass production would be their aim.

Your supposedly better link agrees: "While it should be stressed that commercial 7nm chips remain at least two years away, this test chip from IBM and its partners is extremely significant for three reasons: it's a working sub-10nm chip (this is pretty significant in itself); it's the first commercially viable sub-10nm FinFET logic chip that uses silicon-germanium as the channel material; and it appears to be the first commercially viable design produced with extreme ultraviolet (EUV) lithography."


Great talk that describes how modern (as of 2011) computer chips are manufactured: https://www.youtube.com/watch?v=NGFhc8R_uO4


This EEVBlog episode on silicon chips is also great: https://www.youtube.com/watch?v=y0WEx0Gwk1E


Excellent talk. Here is another SA article, 'The First Nano-Chips': http://community.nsee.us/courses/376-0_nanomaterials_sp04/na...


Thanks for this. Great presentation. I learned a lot.


No mention of Intel anywhere in the article and how far along they are. Also 7nm blows my mind. I mean current CPUs already blow my mind with how tiny the transistors are getting.

And specially stabilized buildings? "NOBODY MOVE! WE'RE ETCHING!"


I had a tour of AMD's Dresden fab several years ago, it was literally a building inside a building, with the inner building mounted on shock absorbers to isolate it from vibrations. IIRC the manufacturing chain was entirely automated - silicon in, chips out.

That was several generations ago, I'm looking forward to seeing what is required to manufacture with high yield at 7nm!


Is it possible for general members of the public to arrange tours of microprocessor fabs anywhere?


With all the clean rooms required, I doubt it, but I don't know.


We have an ultra-clean room. It has a window.


My university has clean rooms, they have windows, the hallways have windows too so you can peek in from outside. Always great to show to visitors :) (at night they have weirdly colored lighting, pink, purple, so it looks really funky, no idea why)


It's possible that when nobody is working in the room, the lights aid in maintaining a sterile environment.

At specific wavelengths (~245-265 nm), UV light inactivates quite a few living things. As light is quantized, the purplish color you see is due to electrons stepping down in energy.


The UVC bulbs deployed in my Florida home's water treatment system put out UV in the region of 253.7nm. An ophthalmologist friend pointed out that unfiltered long-term exposure to these wavelengths first causes conjunctivitis (pink eye), and conditions go downhill from there. It's suggested that you don't stare directly at sources of this purplish color for any protracted period.


Nope. Our bio hoods have a toggle switch. Lights off is in the middle, lights on is one direction, UV sterilization is the other direction. To turn the lights on, you have to turn the UV off.


Clean rooms that work with semiconductors use light-sensitive photoresists. I've always dealt with yellow windows and yellow covers over the lights, though in theory pink would accomplish the same thing: keeping blue light off the wafers while you're working with them.

The concept is not at all dissimilar to old-fashioned darkrooms in photography. Outside light will wash out your image.


Certain parts of the cleanroom will look yellow--these are the spaces where they do lithography with resists that are openly accessible to the air. UV-sensitive resists will chemically crosslink in UV light, so they put on filters on the fluorescent lights (or use special lights) that keeps the wavelength away from the higher-energy blues and towards the lower-energy yellows and reds.


It's probably more the secrecy of their process that stops such tours from being available. IC manufacturing involves tons of proprietary trade secrets and NDA'd stuff.


I'm not sure what secrets you would gather from a tour since everything happens in enclosed machinery.


Were you doing the tour as a vendor or a customer?


The M68k was produced on a 3500nm process, which means at 7nm you could fit two whole M68k dies within the footprint of an original M68k transistor.


That would mean that with a 7nm process one could fit ~136,000 (68,000 * 2) M68ks in its original die. Mind-boggling.
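For what it's worth, here's a back-of-the-envelope version of that arithmetic in Python. Node names no longer track feature sizes exactly and real dies are mostly wiring, so treat this as a toy area-scaling estimate rather than a real layout claim:

    old_node, new_node = 3500, 7        # nm, the two "process" names
    transistors_68k = 68_000            # the M68k's famous transistor count

    # Naive assumption: area per transistor scales with the square of the node name.
    area_shrink = (old_node / new_node) ** 2
    print(area_shrink)                           # ~250,000x smaller per transistor

    # Complete 7nm M68ks fitting in the footprint of one original transistor:
    print(area_shrink / transistors_68k)         # ~3.7, same order as the "2 dies" claim

    # And in the whole original die area (one old transistor's footprint per old transistor):
    print(int(area_shrink))                      # ~250,000 M68ks, so the ~136,000 figure
                                                 # above is the right order of magnitude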


Mind duly boggled.


Given the way we're going, I wonder how soon it will be cheaper and easier to build a cpu plant in orbit...


One covered in lead to protect the litho from high-energy particles. This might be the time when pushing an asteroid into one of the Lagrange points and hollowing it out would be useful. Go SF dreams, go! (Where is Musk when we need him?)



Actually Intel was mentioned and its struggle with the 14 mm node.


I think you're off by a factor of 1,000,000 ;-)


Yes. May I blame autocorrect?


I think even me and my soldering iron could manage @ 14mm :)


The lattice spacing of silicon is ~0.54nm, so 7nm is around 13 lattice spacings - it's really impressive. Slowly but surely we will hit atomic limits.


The node size does not correspond to a given dimension of a structure anymore; it's a computed property, derived from the area of a given standard cell (e.g. an SRAM cell).


That's informative but doesn't disagree. It's not a 1:1 correspondence but it's a pretty close correlation to the actual widths of things.


Silicon is diamond cubic; 0.54 nm corresponds to 8/sqrt(3) radii worth. So the diameter of a silicon atom is ~0.23nm, and 7nm = ~30 silicon atoms across.
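A quick sketch of that arithmetic, using the standard diamond-cubic geometry for silicon (lattice constant a ≈ 0.543 nm):

    import math

    a = 0.543                       # nm, silicon lattice constant
    bond = a * math.sqrt(3) / 4     # nearest-neighbour distance, the "diameter" used above
    print(bond)                     # ~0.235 nm

    print(7 / a)                    # ~12.9 lattice constants across 7 nm
    print(7 / bond)                 # ~29.8, i.e. ~30 silicon atoms across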


Wow, 7 nanometers is incredible! I wonder how small they can get silicon / silicon-germanium based chips before we have to resort to other techniques such as light processors (since light beams can run closer together and even cross each other without issue). The 10 nanometers they're introducing next year is also incredible, at least to me, since I'm not a hardware engineer and can't imagine how difficult these are to manufacture.


Process names have little to do with actual feature sizes any more. I think 130nm was the last where the name had any correlation to the actual physical dimensions.

(Ex.: https://en.wikipedia.org/wiki/45_nanometer#Example:_Intel.27... )


I could be wrong, but I think optical processors would have serious diffraction problems with a 7 nm feature size, since the wavelength of blue light is somewhere around 450 nm.

As far as fabrication, one problem is that obviously you aren't using visible light to etch features on your wafers. The x-rays must be fun to work with... Not to mention, your photoresist would have to resist x-rays. Getting x-ray-resisting photoresist on and off your wafer must be tricky. Since 7 nm is about the size of several atoms, your wafer probably needs to be almost perfectly pure, which can't be easy, either.


It's not X-rays, it's "extreme ultraviolet" (EUV), plus multiple patterning to work around diffraction limits. http://www.extremetech.com/computing/160509-seeing-double-ts...

(tl;dr: 193nm light works down to about the 28nm node; going further requires moving deeper into the UV and/or immersion in a liquid with a different refractive index)
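For a rough feel of why 193nm light bottoms out around those feature sizes, here is the standard Rayleigh resolution criterion (CD ≈ k1·λ/NA) with typical published NA and k1 values - these numbers are illustrative assumptions, not taken from the article:

    def min_half_pitch_nm(wavelength_nm, na, k1=0.25):
        """Smallest single-exposure half-pitch per the Rayleigh criterion."""
        return k1 * wavelength_nm / na

    # 193nm ArF immersion lithography (water immersion pushes NA to ~1.35)
    print(min_half_pitch_nm(193, 1.35))   # ~36 nm -> multiple patterning needed below that

    # 13.5nm EUV with a typical NA of ~0.33
    print(min_half_pitch_nm(13.5, 0.33))  # ~10 nm in a single exposure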


Hmm yeah I have no idea, I mostly read this which really intrigued me but I don't really understand all of the technology and science behind it http://www.intel.com/pressroom/archive/releases/2010/2010072...

I had always gotten the impression that even if they couldn't get as small, the lower heat and the ability to cross beams could allow them to be denser. But like I said, I don't really know what I'm talking about :)


You can't cross light beams at 7nm; if you had a "crossroads" structure it would simply diffract round the corner and exit at all three other points.


This should answer your question: https://www.youtube.com/watch?v=rtI5wRyHpTg


Interesting, thanks!


Really great video.


They use germanium.


It's silicon-germanium (so still has silicon in it). I'm curious what comes after :)


III-V semiconductors like gallium arsenide have high mobility and are probably being tested right now in research labs around the world for CMOS transistors (they are already in use for things like solar panels). But there's always the problem of how to scale these things as well as silicon-based manufacturing.

Further out there is still stuff like graphene or carbon nanotubes. Of course, the structure of the transistor itself might change, which could enable further downscaling. There's already been a switch from planar to silicon-on-insulator (e.g. GlobalFoundries, TSMC, Samsung(?)), and Intel has Tri-Gate (everybody else calls it FinFET).

One day we might see the natural evolution to the gate-all-around FET, which would be something akin to a silicon nanowire (note: planar has the gate on top, FinFET has the gate on top, left, and right of the channel). However, there are huge roadblocks in manufacturing to solve. And this could really be an issue. We might very well be able to build at the 5nm node. But if we can't build them fast and cheaply enough, no one's going to do it. Manufacturers are already triple-patterning and doing all kinds of voodoo just to keep up with Moore's law.

Good old silicon might actually stay a central part for a much, much longer time.


Could the vacuum channel transistor[1] also be a contender?

[1] http://spectrum.ieee.org/semiconductors/devices/introducing-...


There's no one way forward. A lot of things might happen and it will only be decided once a company actually ships a new and viable technology.

Remember what Intel did with FinFETs. It was the same discussion then (What could the next thing be? III-V, SOI, blabla?). At one point, Intel simply came on stage, surprised us with FinFETs, and everybody was like: "I guess it's FinFET, then." The idea of FinFETs is actually from the 90s or so. The sole reason no one did it before is that no one could actually build them at scale (AFAIK, Intel is still the only company that ships FinFETs).

Keep in mind that manufacturing is very hard. It is very unlikely that there will be more than incremental steps. Just changing the channel material is already quite the task.

Also, don't believe any "This is the next transistor!" stuff. You can find these things a lot but they rarely mean more than some department trying to make a bit of publicity.


IBM has some good news on III-V materials as well.

https://www.aip.org/publishing/journal-highlights/futuristic...


IBM was one of the first big companies to develop/commercialize SOI technologies.


SOI is better than the planar design since you can fully deplete the channel. But it's worse than FinFETs when you consider the control of the gate over the channel and when it comes to thermals. The oxide below the channel also acts as a thermal insulator, so the heat can't be transported away as efficiently.

The upside of SOI is that it's easier to manufacture. That's why we see so much of it around (AFAIK, GlobalFoundries and TSMC still do it).

But the actual way forward is the FinFET. The 7nm chip from the article is actually built with FinFETs. Otherwise that thing would probably not work very well.


Graphene looks promising (as it is for everything).


Hehe, yeah, I was just reading a magazine this morning saying it's the next big material in cycling - for everything from strengthening carbon fiber parts, to conducting electricity to replace the cables used for shifting/brakes, to heart rate monitoring clothing, etc...

Seems like the wonder material.


Carbon sucks in chips.

C has a 5.5 eV band gap, while Si has 1.1 eV and Ge has 0.67 eV.

What's a band gap? The energy gap that determines how readily a semiconductor switches between insulating and conducting. https://en.wikipedia.org/wiki/File:Bandgap_in_semiconductor....

The only real reason to switch to carbon chips is noise reduction at the 4nm node (if we ever go that far, we're getting into Long X-Rays at that point for lithography).

Also, the 4nm node will only be ~16-18 carbon atoms wide.


Graphene is a massive pain to manufacture. For example, I don't think you can sputter it on to wafers.


The last time I checked, diamond looked promising.


Doubtful: You can't really dope it n-type.

Diamond has the interesting property of having its conduction band above the vacuum level, so that free electrons would (in principle) fall out of the material. (Surface physics gets in the way of that.)

It's the reason that diamond is used for cold-cathode emitters btw.


Very interesting. Good to see that the article points out that going from working transistors to a commercially viable industrial process is also a big challenge. There are a lot of technologies and industry players that need to solve big problems before the node can start delivering. But that is what the ITRS is for.

Also, it's interesting to see how things like e-beam lithography are pushed once again at least a node into the future. We (as in they) are still able to tune and optimize on the same infrastructure.


As I recall, there is microelectronics fab work in Taiwan, South Korea, and, in the US, at IBM and Intel, at least. And maybe China and Russia are trying to get caught up in fabs.

I wonder: What organization, really, is mostly responsible for the newer fabs? I mean, do each of Samsung, Intel, IBM, etc. do everything on their own? Or is there a main company, maybe Applied Materials, with some help from, say, some small company for UV sources, some optics from, maybe, Nikon, some mechanical pieces, etc., that does the real work for all the fabs?

7 nm -- what speed and power increases will that bring over 14 nm, 22 nm or whatever is being manufactured now, etc.?

Long live Moore's law! It ain't over until the fat lady sings, and I don't hear any fat lady yet!


IBM/Intel/Samsung buy tools from various companies. By "tools", I really mean huge pieces of instrumentation that cost many (tens to hundreds of) millions of dollars from other companies and that are used for the various processing steps (deposition/growth of materials on wafers, patterning resists, etching, etc). The development of each of these tools is immensely difficult and challenging, and making them talk to each other and designing manufacturing pipelines is another immense challenge. IBM/Intel/Samsung's job is to design chips (an immense challenge on its own), come up with a process to manufacture them, and then take each of these very complex tools, integrate them into a manufacturing pipeline (with QC), and manufacture the devices that they want.


How much of it is turnkey, and how much is lots of one-off, proprietary engineering and system integration?

I was guessing that maybe, for the fabs themselves, there was mostly a single company that delivered fabs. Or, why reinvent the wheel several times?

Sure, for the chip design, say, by Qualcomm, Samsung, Intel, IBM, that's a lot of design software, know how, etc. And, sure, QC has to be one heck of an Excedrin headache but with likely some long standing basic ideas for testability.


> And maybe China and Russia are trying to get caught up in fabs.

This is something that interests me. It must be terribly difficult to get up to speed on something like this, even with vigorous state funding. Are there any layman-readable sources on the topic?


The press articles about this are generally misleading in that they use silicon-germanium as the catchphrase that represents the breakthrough, whereas in fact SiGe processes have been available for at least a decade. I know this because I developed chips for an IBM SiGe process a decade ago, and in college I did a research paper on semiconductor "superlattices" using an old textbook from our school library. It's not a new technology by any means.

IBM's 7nm is a great accomplishment for sure, but we really don't know anything about how it was made from the articles. Essentially SiGe is a bit more conductive and can switch faster than normal Si chips, thanks to quantum tunneling.


Time to start working on the 7km chip. Fibre everywhere, content delivery servers everywhere, game servers out the wazoo so my crappy media streaming gadget or VR headset can remotely pull in the latest movies and games in 4K with minimal lag. You could outfit a few of the world's major cities for the cost of a new fab.

Unfortunately this won't sell new consumer hardware on a regular basis.


You can build an entire new industry around it.

Everything as a service. Even hardware could become a service. You wouldn't have to actually own it; instead you pay a monthly fee and you have access to product X. You get the latest models without any extra fees.

The advantage of having a service is that the customer is hooked and it's harder to leave.


These folks are trying something like that: https://saybyebuy.com/

As an idea, I feel it's wonderful. It would massively reduce wastage.


What scares me about proposals like this is that it's further stratification of society into two separate classes: those who actually own everything and those who must appease the owners.

For example, I can put £40 in my sock drawer every month and, in eight months, I could buy a PS4 and use it for the rest of my life. I could then start saving that £40 toward a different purchase. Alternatively, I could go with the site you listed and pay £40 a month for the rest of my life just for that same PS4, never making headway.

Now, you can point out that a PS4 is almost the definition of an unnecessary luxury and that I don't have to pay the £40 monthly rental fee, and you'd be right. I mostly went with that example because it was right on the front page and such a terrible deal. Still, in the world where I can own things, I can have the PS4 and the £40 a month, after a little over a half year of hardship, while the rental society won't let me have that. Similarly, I can buy a DVD and watch it forever, instead of shelling out for Netflix each month and hoping that they don't drop that title.


Exactly. This is a very old business model; the UK company "Radio Rentals" pioneered it in the 30s and was very successful in the era when TVs were expensive (compared to housing!). It's been largely obliterated by cheap credit.

All of the items on that front page have a monthly rental cost that's about 1/10 the purchase cost. In most cases you'd be better off with a 12 month personal loan than renting it for 12 months and you get to keep the item. If you want it for 18 months then even buying it on a credit card at 20% looks like a reasonable option.
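A trivial sketch of that comparison, using the figures from the comments above (roughly £320 to buy the console outright, per the eight-months-of-£40 example, vs £40/month to rent - the commenters' illustrative numbers, not current prices):

    price, rent = 320, 40     # GBP: one-off purchase price vs monthly rental

    for months in (8, 12, 24):
        print(months, "months renting:", rent * months, "GBP vs", price, "GBP to buy once")
    # After 8 months the renter has already paid the purchase price and still owns nothing.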


> Similarly, I can buy a DVD and watch it forever, instead of shelling out for Netflix each month and hoping that they don't drop that title.

I agree and I have piles of DVDs to show for it. Here's my problem. One DVD is a fine example, but what about 10, 100, 1000? At some point, it's worth it to use Netflix for storage.

I also have stacks of VHS tapes, and audio cassettes and LPs. The life of my playback hardware is finite. I can refresh the hardware or somehow try to copy my old media to new media. I'll have to think what I might have done different if something like Netflix had existed when I started buying audio and video to own and accumulate.


> I could buy a PS4 and use it for the rest of my life

If you're amazingly lucky. In actuality, you've got to replace it (or have it repaired, which is usually almost as expensive) every few years because it wears out.

And you've got to continue paying for electricity to run it all that time. And the TV (and replacements/repairs for it) to play it on. And the electricity to run the TV. And a payment for the dwelling in which it's housed. (Which you can buy outright… but even if you do, the payments for upkeep and the taxes to keep roads/etc coming to it end up being of a similar order…) Etc, etc.

In practice, unless we lived under a radically different system (far more different than [state] capitalism with/without a guaranteed minimum income, or communism), we pretty much have to have a system in which everyone can afford to continue making regular payments on many if not most of the expensive things they use.


In the "rental" situation you still pay for all of those things, but with the added overhead of corporate bureaucracy.


OnLive tried the games streaming thing and failed.


I used OnLive and really enjoyed it, for the most part. I think it was just ahead of its time. I personally would rather stream from a render farm than lug a big noisy gaming tower around. I think this model will be tried again successfully in the near future.


Still in the research phase, from a company known for having 10% yields the last time it innovated in the processor space.

Also, hasn't IBM just sold its division to GlobalFoundries? So are they double-dipping as usual by licensing them new tech separately?


IBM just sold their division to GF -- but they're maintaining an R&D only operation. IBM, GF and Samsung are all part of the "Common Platform" to share R&D costs:

http://www.commonplatform.com/


YES! Kill Intel, PPC64 everywhere :)

It will not happen, I know, but "Wouldn't It Be Nice" was always one of my favorite Beach Boys songs (Pet Sounds!) https://www.youtube.com/watch?v=ofByti7A4uM

This is THE chance.


I'm upvoting you simply because I have had a pretty good day and because I love "Wouldn't It Be Nice". (In fact, Pet Sounds is an awesome album.)

In a former job, I was working on an application that did a lot of computational geometry, and I remember reading one day that then-current POWER chips could do floating point multiplication and division in a single cycle (plus latency for pipelining, of course)...


Why exactly do you care so much?


As POWERfan said.

The Intel instruction set is an abomination. You couldn't even think of something worse; however, they managed.

Intel needs competition. Everybody else died away.

PPC64 is out in the open; everybody can make one. This is better than the IBM PC revolution, where you could license it to produce competing parts. It's also better than the various ARM chips.

We need to break the HW barrier of the last few years. CPU speed has stalled; only with new tech, as in the new Z series or Power8 chips, can Moore's law be revived.

And there are many more arguments to be excited.


I can't speak for the GP, but POWER is a true RISC ISA, and many prefer it just for that reason.


There is no such thing as CISC or RISC anymore, what with micro ops.


I know nothing about silicon fab, but I can't help but wonder how they mitigate the effects of quantum tunneling at such a small scale?


It's neat but will only benefit the largest companies with the most elite developers. I've learned a lot about hardware development in the past year for the purposes of imagining clean-slate, subversion-resistant chips. The work it takes to get 90nm and below chips working, especially inexpensively, is pretty mind-boggling, with many aspects still dark arts shrouded in trade secrets. Many firms stay at 130-180nm levels, with quite a few still selling tech closer to a micron than to a 28nm chip. Tools to overcome these challenges cost over a million a seat.

So, seeing another process shrink doesn't excite me given we haven't tapped the potential of what we already have. Lots of technologies help: EDA, FPGAs, S-ASICs, multi-project wafers, ASIC-proven I.P., and so on. Yet even 350nm still isn't very accessible to most companies wanting to make a chip because the tools, I.P., and expertise are too expensive (or sometimes scarce). Yet the benefits are real in so many use cases (esp. security). I'd like to see more companies dramatically bringing the costs down and eliminating other barriers to entry with affordable prices.

Example of the problems and what kind of work we're looking at: http://eejournal.com/archives/articles/20110104-elephant/

Example direction to go in: http://fpgacomputing.blogspot.com/2008/08/megahard-corp-open...

I think the best model, though, is to do what the EDA vendors did: invest money into smart people, including in academia, to solve the NP-hard problems of each tool phase with incremental improvements over time. I'm thinking a non-profit with continuous funding by the likes of Google, Facebook, Microsoft, Wall St firms, etc. A membership fee plus licensing tools at cost, which continues to go down, might do it. Start with simpler problems such as place-and-route and ASIC gate-level simulation to deliver fast, easy-to-use, low cost tools. Savings against EDA tools bring in more members and customers whose money can be invested in maintaining those tools plus funding hardest ones (esp high-level synthesis). Also, money goes into good logic libraries for affordable process nodes. Non-commercial use is free but I.P. must be shared with members.
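As a purely illustrative aside (not from the paragraph above): place-and-route boils down to NP-hard combinatorial optimization, which is part of why it's a sensible starting point for an open tool effort. A toy placer for a hypothetical four-cell netlist, minimizing Manhattan wirelength with naive simulated annealing, might look like this:

    import math, random

    nets = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]    # made-up connectivity
    cells = ["a", "b", "c", "d"]
    grid = [(x, y) for x in range(2) for y in range(2)]        # 2x2 placement grid

    place = dict(zip(cells, random.sample(grid, len(cells))))  # random initial placement

    def wirelength(p):
        # total Manhattan length of all nets under placement p
        return sum(abs(p[u][0] - p[v][0]) + abs(p[u][1] - p[v][1]) for u, v in nets)

    temp = 2.0
    for _ in range(1000):
        u, v = random.sample(cells, 2)                 # propose swapping two cells
        cand = dict(place)
        cand[u], cand[v] = place[v], place[u]
        delta = wirelength(cand) - wirelength(place)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            place = cand                               # accept improvements, sometimes worse moves
        temp *= 0.995                                  # cool down

    print(place, wirelength(place))

Real placers handle millions of cells, timing, congestion and legality constraints, which is where the decades of EDA research the comment describes come in.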

Setup right, this thing could fund the hard stuff with commercial activity and benefit from academic/FOSS style submissions. With right structure, it also won't go away due to an acquisition or someone running out of money. Open source projects don't die: they just become unmaintained, temporarily or permanently. Someone can pick up the ball later.

Thoughts?


I don't think the diameter of a DNA strand is 2.5nm...


Return of the PowerPC Mac?


Ha, not likely. But because IBM and GF will manufacture for 3rd parties there's at least hope for AMD now if they get a better design.


Also research devices like this: https://en.wikipedia.org/wiki/TrueNorth


I don't think that AMDs designs are particularly bad... an FX-8350 can go toe to toe with a higher end i5. Of course that's without the power savings of a newer process.

I do hope that AMD comes up with something with a single-core performance competitive to an i7 with a power envelope in the ballpark.


>single-core performance competitive to an i7

The best AMD chip can't even compete with the cheapest Pentium (Celeron) when it comes to IPC.


But it will support 32GB of ECC ram and more than two simultaneous processes. ;-)


Did someone say ASIC?



