Graphene Interconnects Aim to Give Moore's Law New Life (ieee.org)
70 points by jnord 11 days ago | 51 comments





Graphene seems to be like a hotshot actor who lands a million auditions but somehow never makes it past a walk-on role in a toothpaste commercial.

There are a few reasons for this. First, there are a few ways to make graphene: you can use CVD or you can use mechanical exfoliation. Mechanical exfoliation requires scotch tape and scales to maybe a flake per hour per grad student. CVD is quite scalable but makes shitty graphene. A lot of graphene breakthroughs (superconductivity, for instance) need mechanically exfoliated graphene.

Secondly, process fab is VERY conservative. There are numerous amazing ferroelectrics that you can grow tons of that would absolutely spank NAND flash. However, they're not silicon-fab processes, so nobody makes them.


> There are numerous amazing ferroelectrics that you can grow tons of that would absolutely spank NAND flash. However, they're not silicon-fab processes, so nobody makes them.

So why doesn't somebody new start making them and put all the current flash producers out of business?


Silicon technology is already so efficient it would be hugely expensive to compete with it. No one has the money or risk appetite to try.

I’d love a reference or two to read on ferroelectric memory tech…

You might just look for work on HfO2

> scotch tape

Is there actually a special property of scotch tape that makes it the ideal candidate over some more specialized industrial adhesive? Or are these references to scotch tape just nods to the fact that you _can_ use scotch tape, like in the original graphene experiment?


The latter. A "super" material worthy of a Nobel Prize being produced using common office supplies is just a fun thing to throw into a story.

It happens to have a good level of stickiness. People also use blue Nitto tape, and tape used for fixturing on dicing saws. I think basically anything could work; it's just that people use what's lying around.

But scotch tape is nearly as cheap as grad students.

It takes a long time to go from lab bench and physics papers to practical use, and longer still to mass-produced, generally available products.

Graphene has incredible properties as a structural material too, but so far producing it at that scale and making it behave properly in things like composites has been very hard. But the physics says that once we get it to work, we'll have composites many times stronger than steel or materials like Kevlar.


It's because it's just about impossible to handle: the number one thing a sheet of graphene wants to do is stick to another sheet of graphene and become... regular graphite.

Well, someone needs to tell graphene to stop fucking its coworkers and get back to set.

The kids these days are so spoiled. Silicon doping was discovered like when? And how long did it take to make a practical transistor? Seriously though, it's not like every newly discovered phenomenon owes you something.

20 years.

And we have been able to produce graphene since around 2004, I believe, so we are soon going to cross that threshold.


I've been watching technology for the last fifty+ years, and I had the same (admittedly unfair) reaction as the OP.

Lol, I'm obviously joking; I'm probably younger than both OP and 70% of people out here. But my point that nature doesn't owe us anything still stands. University press releases are really to blame for building up unrealistic expectations, but then you can't expect them to honestly tell you "we spend millions on things with zero practical applicability just because it's awesome".

True. Guess I’m disheartened by years of clickbait.

It's okay. Next year we will defeat and reverse aging with one simple trick, so you can wait longer, at least according to the latest health-science clickbait.

I will not rest until I have you immortal, flying your fusion-powered car, using augmented-reality VR controls, to your very own immersive shopping experience with AI-assistant android sexbots catering to your every whim, and I will not REM-enhanced super-sleep until that happens!

I'll give you fifteen minutes to call me back.

/Jerry Maguire out


The reason is that it’s very difficult to get a consistent product from mining, from what I have heard.

> The sacrificial film is placed on top of the transistor chip, and a source of carbon is deposited on top. Then, using a pressure of roughly 410 to 550 kilopascals, the carbon is forced through the sacrificial metal, and recombines into clean multilayer graphene underneath. The sacrificial metal is then simply removed, leaving the graphene on-chip for patterning.

Incredible


Awesome! Let's hope Intel, for their sake, can make this happen.

But I'm already thinking about photonic CPUs that use light instead of electricity for computation. Of course, I don't fully know how it works, but it seems to be lower power and, I guess, the next iteration of computation before we get to room-temperature quantum computers.


The problem with light is that it's quite a lot bigger than the features on current chips.

If you get enough other benefits from going up from 2nm features to e.g. 200nm UV-C photons, then you may still choose to do so.


Do optical gates not switch a lot faster? I do not know whether it would be enough to offset the bigger size.

The switches themselves do (IIRC by a factor of about 1e4), but if you have to space them farther apart then the combined whole may not benefit from this.

If you have a system clock running at 3 GHz, the speed of light limits your causal distance to just under 10cm per clock cycle. CPUs are already close to that for size and frequency, but let's say you're taking a 1 cm by 1cm silicon chip on a 2nm feature size process and replacing it like-for-like with a photonic chip with light that limits it to 200nm features — now it's 1m by 1m and can't go faster than 300 MHz, likely a lot less.

This doesn't mean it is useless. For example, there's hope that it will reduce energy use, which is directly useful all by itself, but also means it may be sensible to move to a fully 3D structure, which silicon can't really manage because of thermal issues. Going from 2D to 3D helps a lot, and might allow that 1m by 1m by 200nm (*2 thickness for insulation) sheet to be compacted to a 7.4mm cube, which then doesn't need to be slowed all the way down to 300 MHz due to causality.
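A quick sanity check of those numbers, as a rough sketch (assuming c ≈ 3e8 m/s and naive like-for-like linear scaling from 2nm to 200nm features, which is obviously a simplification):

  c = 3e8                     # speed of light, m/s (rounded)
  print(c / 3e9)              # causal distance per 3 GHz cycle: ~0.1 m, just under 10 cm
  side = 0.01 * (200 / 2)     # 1 cm chip scaled up 100x linearly: 1 m
  print(c / side)             # causal frequency limit at 1 m: ~3e8 Hz, i.e. ~300 MHz
  vol = side * side * 400e-9  # volume of the 1 m x 1 m sheet at 400 nm thickness (incl. insulation)
  print(vol ** (1 / 3))       # side of an equal-volume cube: ~0.0074 m, i.e. ~7.4 mm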


Very interesting. What about memory? Current problems are mainly memory bottleneck related. How can one solve that in photonic chips?

Optical storage currently means things like Blu-ray (non-volatile), plasmons, delay lines (both volatile), or Bose-Einstein condensates (requiring extremely impractical cryogenics).

All of these are much lower density than magnetic (hard drives) or electrical (RAM or flash).

I've not heard of RAM or flash having thermal issues (though I'm sure it would happen eventually), but that suggests 3D stacking is easier for storage, which would remove that potential advantage of optical.

One thing I've seen suggested for optical computing is to create optical elements (e.g. lenses, holograms) that represent the same transformations as a layer of a neural net; they're not at all space-efficient, and any change to the network requires basically replacing the whole thing, and you can do a similar thing with a suitable network of hard-wired resistors and transistors, but it's an interesting idea that I see come up very occasionally.


Interesting question. The answer to the second part: we already have much faster-switching transistors (GaAs, SiGe, InP, now GaN), but they cannot be miniaturized easily and the production technology isn't as simple as CMOS. One can build computers with them, but due to physical size and large distances they wouldn't perform well compared to a CMOS chip. So the answer is: size matters. Large devices cannot be used for building complex, fast computers.

SiGe scales down, but we can't afford the power density resulting from building logic with it, so it ends up losing to CMOS.

Quantum computers are never* going to be good at a whole lot of tasks that classical computers are already used for.

* Some people have weird ideas.


Well, right now, a magical way to resurrect Moore's law is no more or less crazy than a magical way to scale quantum computation.

Photonic computation is never going to make sense as an alternative to electrical computation.

Among other reasons, you can create an electronic transistor in silicon by using an electrical signal to open and close a gate.

You can't really do this with light; light beams just pass through each other. And the kinds of light-carrying media that can be affected by the presence of a control beam respond much more slowly and less effectively than doped silicon responds to voltage.


This! And optical waveguides are big, and they need to be spaced apart to avoid interference. The speed of light limits how fast such large circuits can be.

Nothing beats CMOS transistors in density.


> light beams just pass through each other.

You're clearly not using enough power for the Schwinger effect. (More seriously, there are other non-linear effects in non-vacuum optical media.)


Let's hope the fab that pioneers this is not owned by Intel

Moore's law is over. Nothing is going to restore that regular cadence of device shrink and performance increases. Each innovation is now a single tiny step in the endgame of scaling.

It feels like the next era, and maybe the rest of humanity's existence, is the Age of the Plateau. I wonder how they will handle it? We lived in such a special time in all of human existence.

We are still many clicks from physical limits for computation, so it depends on how much money we want to spend.

"Rock's law or Moore's second law, named for Arthur Rock or Gordon Moore, says that the cost of a semiconductor chip fabrication plant doubles every four years. As of 2015, the price had reached about 14 billion US dollars."

https://en.wikipedia.org/wiki/Moore%27s_second_law

It seems likely that we're relatively close to the point where it will no longer be economical to push the limits here. It's unlikely that even the entire world working together would want to spend more than $1T for a single fab, which Rock's law suggests is less than 20 years away.
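For what it's worth, here's a rough extrapolation from the numbers quoted above (treating Rock's law as an exact doubling every four years from $14B in 2015, which it obviously isn't):

  import math
  cost_2015 = 14e9                         # quoted fab cost in 2015, USD
  doublings = math.log2(1e12 / cost_2015)  # doublings needed to reach $1T: ~6.2
  print(2015 + 4 * doublings)              # ~2040, i.e. roughly 15 years out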


  It seems likely that we're relatively close to the point where it will no longer be economical to push the limits here. It's unlikely that even the entire world working together would want to spend more than $1T for a single fab, which Rock's law suggests is less than 20 years away.
Given that Apple was worth $600 billion at the start of 2019 and is worth $3.7 trillion now, five short years later, I think a $1 trillion plant in 2045 is not so farfetched. This is especially true if compute requirements for AI continue to grow.

Twenty years is a long time. I don't think people back in 2005 could have predicted our needs for chips in 2025.


You are conflating asset price inflation and cost inflation; they are not the same. Apple could lose $2T in market cap next week, and the cost of the fab would not be discounted in the same way.

Ok, but let's say the scenario is that Apple, Google, and Microsoft need to build a chip plant for 0.5nm chips. They need $1 trillion.

If each company is worth $20 trillion in 20 years, they can easily raise $1 trillion together by selling some shares or using their shares as leverage or just straight up using their cash flow. I'm simplifying things by ignoring inflation, but you get the point.

The bottom line is, if capitalism thinks a $1 trillion fab will produce more than $1 trillion in value, it will happen.


This appears to be missing the forest for the trees.

Which is to say, if Rock's law continues to hold, it doesn't matter if some global consortium can pull together $1T for a new fab; can they pull together $2T four years later? And $4T four years after that? And $8T? $16T? To say that a doubling at this rate is sustainable is to suggest that you more than double the value at each step. At some point this can clearly not be the case, unless you want to posit a world where going from one process node to the next literally doubles the entire productive output of the human race.
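To put rough numbers on that, here's the doubling sequence continued past a $1T fab, compared with world economic output, which I'm assuming is on the order of $100T/year just for scale:

  world_output = 100e12  # rough annual world output, USD (my assumption, for scale only)
  cost = 1e12            # hypothetical $1T fab as the starting point
  for gen in range(5):
      print(f"+{4 * gen}yr: ${cost / 1e12:.0f}T ({100 * cost / world_output:.0f}% of world output)")
      cost *= 2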

Absent some unforeseeable technological breakthrough, at some point it has to slow down, either gradually or drastically, or stop altogether. And for anyone who's currently middle-aged or younger, it's projected to happen in your lifetime.


The Asianometry YouTube channel stated that the most recent iteration would not have happened had it not been for the AI gold rush. So no, we are not far off.

Is there a source for this video?

TSMC has always had N2 on its roadmap long before the AI boom.


That still means cheaper transistors, right?

Hard to believe the claims here when no real empirical data is presented. Has the process been integrated in any foundry (even a test fab)? Have they been able to tape out even an old chip, like 180nm, one with copper and one with graphene? If so, at what wafer size and what yield? How many metal layers can be processed (global or local interconnects)? If it's pressure-based, how will the bottommost layers be affected as the interconnect stack is built up?

Also, backside power delivery plus new materials like Ru will keep the interconnect roadmap going for a while.

Articles like this read as nothing more than fluff pieces.


Does anyone know of a way other than Raman spectroscopy to classify graphene monolayers? I recall that making the graphene was simple, but confirming it was the real chore.

I put graphene into the same category as quantum computers.

No doubt it would be a massive technological advancement if it can be mass-produced.

But we've been waiting 20 years and still haven't realized the benefits.


Graphene is closer than quantum. We just need a process breakthrough for manufacturing and integrating it into products.

You can even make graphene at home with a blender.



