Three Generations of Asynchronous Microprocessors (2003) [pdf] (caltech.edu)
56 points by steven741 9 months ago | 15 comments



I've always been fascinated by asynchronous hardware design.

When you first learn about H/W design in Verilog or VHDL, it feels like your mind is immediately shoehorned and molded by both the literature and your teachers into thinking synchronously (everything has to be clocked, and signals crossing clock domain boundaries are to be treated like they're some form of electronic nitroglycerin).
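
For what it's worth, the standard prescription for handling that nitroglycerin is the two-flop synchronizer. A minimal Verilog sketch, with module and signal names of my own invention:

    // Classic two-flop synchronizer for a single-bit clock domain crossing.
    // The first flop may go metastable; it gets a full cycle to settle
    // before the second flop samples it.
    module sync_2ff (
        input  wire clk_dst,   // clock of the destination domain
        input  wire async_in,  // signal arriving from another clock domain
        output reg  sync_out   // version that's safe to use in clk_dst
    );
        reg meta;
        always @(posedge clk_dst) begin
            meta     <= async_in;
            sync_out <= meta;
        end
    endmodule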

As a matter of fact, HDL courses often go by without so much as a mention of the possibility of asynchronous design, or if they do mention it, it's to say something along the lines of "just don't".

It almost feels like designing things asynchronously is a sin to be avoided at all costs.

Even further, the synthesis tools themselves will yell at you when you try to do things asynchronously (probably because many of the optimizations that can be done with a clocked design don't work).
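
As a hypothetical illustration of what sets the tools off, here's the simplest clockless state-holding element, a cross-coupled NOR latch - most synthesis flows will flag it as a combinational feedback loop that static timing analysis can't reason about:

    // SR latch from cross-coupled NOR gates: state held with no clock.
    // Synthesis typically warns about the combinational feedback here,
    // and static timing tools can't analyze the loop.
    module sr_latch (
        input  wire s,   // set
        input  wire r,   // reset
        output wire q,
        output wire qn   // complement of q (while s and r aren't both high)
    );
        assign q  = ~(r | qn);
        assign qn = ~(s | q);
    endmodule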

Granted: asynchronous design is harder and more bug-prone, but - at least to someone like myself with more of a software engineering background - it also feels a lot more natural than being forced into the straitjacket of clocks.

This all leads to the unfortunate situation that, while a lot has been written about how to design with clocks, I haven't come across much literature on asynchronous design methodologies.

If anyone on HN has a reference on asynchronous hardware design methodologies, I'd love to dive into it.


The major thing with a clock-based design is that you have to dedicate a huge amount of silicon and power just to distributing a clock signal across the whole chip.

A group at Cambridge University also designed and fabricated an asynchronous CPU based on an early ARM, a long time ago now. Like the ones mentioned in this article, it behaved very well. It would slow down if they pointed a hot air blower at it (self-timed logic runs as fast as the gates can switch, and CMOS gates switch more slowly as they heat up).


Reminds me of the way loud noises can cause permanent data loss on hard drives [0]. This is a real problem with loud gas-release fire-suppression systems [1].

[0] https://www.ontrack.com/blog/2017/01/10/loud-noise-data-loss...

[1] https://www.zdnet.com/article/how-a-loud-noise-brought-a-dat...


Aren't you thinking of AMULET, from Steve Furber @ University of Manchester?


Hmm. It does appear that when I was wandering around the Cambridge labs they may have claimed a little more than they should have. Maybe someone working there was collaborating with the Manchester group.


> It would slow down if they pointed a hot air blower at it.

Ah! Made my day :D


Rajit Manohar (also from Caltech) is an expert in design tools for asynchronous hardware, and moved from Cornell to Yale where he runs an asynchronous VLSI design group [0].

I've worked in groups that have shipped a few chips with his tools, and AFAIK they're still the best out there. That said, it's still more work than building a synchronous chip, but the power savings can be huge. Rajit has a bunch of neat chips published on his site [1].

What surprised me was that some of the design foundation for this hardware actually borrows from CS, leveraging concepts like communicating sequential processes to provide more formal ways of designing asynchronous circuits - a cool melding of fields! (A small sketch of what that looks like follows the links.)

[0] http://avlsi.csl.yale.edu/research.php

[1] http://avlsi.csl.yale.edu/chips.php
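
To give a flavor of that CSP connection: in the CHP notation the Caltech folks use, a simple pipeline stage is written roughly as *[ L?x ; R!x ] - receive a value on the left channel, send it out on the right, repeat. Below is a hypothetical behavioral Verilog model of such a stage using a four-phase req/ack handshake; it's a simulation-only sketch, and every name in it is invented:

    // One CHP-style buffer stage, *[ L?x ; R!x ], modeled as a
    // four-phase bundled-data handshake. Not synthesizable.
    module stage (
        input  wire       l_req,   // left channel: producer raises req
        output reg        l_ack,
        input  wire [7:0] l_data,
        output reg        r_req,   // right channel: we act as producer
        input  wire       r_ack,
        output reg  [7:0] r_data
    );
        initial begin l_ack = 0; r_req = 0; end

        always begin
            wait (l_req);      // L?x : left data is valid
            r_data = l_data;   //       latch it
            l_ack  = 1;        //       acknowledge
            wait (!l_req);
            l_ack  = 0;        //       return-to-zero phase
            r_req  = 1;        // R!x : offer the data on the right
            wait (r_ack);
            r_req  = 0;        //       consumer took it
            wait (!r_ack);     //       channel idle again; repeat
        end
    endmodule

Chain a few of these and you get a self-timed FIFO; as I understand it, the tools compile CHP programs in this style down to actual circuits.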


If you ran across this paper, you could continue with this one:

https://ieeexplore.ieee.org/document/1652900/

It's from the Proceedings of the IEEE special issue on async:

https://www.researchgate.net/publication/2985481_Special_Iss...

We didn't use any standard tools for the designs described; almost everything was written within the group. I think our experience was that it was less bug-prone than synchronous design (as far as DIGITAL bugs go), but it is harder to ensure that your analog circuits have the expected digital behavior.


Determining the timing on an asynchronous CPU is nearly impossible, especially as you start to scale the number of cores on the die. There's a very good reason why professors try to steer students away from them.

From a commercial perspective, I'd imagine customers who want the CPU as-is, off the shelf, would be rare. And making adjustments to the inherently fragile design is going to be expensive and time-consuming; even if the customer has deep pockets, they probably won't be willing to wait a year for a tape-out.

http://inst.cs.berkeley.edu/~cs150/sp09/Lecture/lec29-async....


I did mention that async design was hard and bug-prone, but that was not my point at all.

My point was that clocked design is such an orthodoxy that you can find almost no information - and more specifically, no methodologies - for attacking asynchronous H/W design problems.

To use an analogy: the vast majority of electronics is digital these days, because analog is darn hard while digital brings discipline and guarantees - but that doesn't mean the subject of analog electronics goes unexplored just because it's hard.


It's not an orthodoxy; in general, you just can't do anything real, in a reasonable timeframe and with reasonable manpower, that a sync CMOS design can't do better. So there's nobody around to discover all the secrets of async methodology.

That is, outside of things like SERDES links, which are a tiny dark cabal that holds their secrets close :)


Indeed, analog circuits are a well-researched topic. However, I think it's important to understand that the transistor CPU is a fairly new topic. In fact, the whole of computer science is a relatively new field.

As for research, there's plenty. I linked slides from an undergraduate course because you wrote that your background leaned more towards computer science. Finding research on this topic is as easy as doing a search through IEEE.

http://asyncsymposium.org/async/Welcome.html

http://www.async.caltech.edu/publications.html


Analog electronics is basically only used nowadays when digital literally cannot accomplish the task (RF, SERDES, ADC/DAC). I haven't seen any asynchronous designs that can do things sync designs can't; they just claim to be better by some metric. So far, process improvements have made the benefits of async design not worth the additional design effort.

I agree that async design is worth researching, but there's a good reason that undergrad-level digital design assumes sync.

The game may change going forward since Dennard scaling is dead. 'deepnotderp probably has some interesting insights on this topic; perhaps he'll chime in.


Here's one:

https://www.researchgate.net/profile/Jordi_Cortadella/public...

Here's an RTL-level approach:

https://www.ndsu.edu/pubweb/~scotsmit/uncle_async_12.pdf

Here's one for the hybrid Globally-Asynchronous, Locally-Synchronous (GALS) model:

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/200502...

On low-level details, here are two on asynchronous synthesis:

https://ieeexplore.ieee.org/document/1327634/

http://orbit.dtu.dk/files/3459606/Behavioral%20Synthesis%20N...

Synthesis and verification together:

http://csl.yale.edu/~rajit/ps/invsynth.pdf

And here are a few designs from my collection:

http://vlsi.cornell.edu/~rajit/ps/dram.pdf

https://escholarship.org/uc/item/23n9d4pj

https://pdfs.semanticscholar.org/2c07/66eda008b59750eb1e789f...

Note: that last one is an earlier design I'm including since older nodes are still available via MOSIS and Europractice, so any patents have probably expired. It got first-pass silicon, too.

And I just randomly found this paper that has an optimization algorithm to reduce the area and latency disadvantages of asynchronous circuits:

http://www.cs.columbia.edu/~cjeong/papers/aspdac07.pdf


Is there an async RISC-V CPU?



