Intel acquired Fulcrum and hasn't released a new product since. One can speculate that they were acquired in part for their experience and tooling for designing asynchronous pipelines.
In the DSP world, Octasic makes DSPs that use asynchronous designs:
Mine start at 50-microsecond intervals. I've worked on stuff with shorter and longer intervals. Sometimes we have lists of tasks that need to run at different rates, so scheduling becomes a real pain. Welcome to the world of real-time embedded software in high-performance systems. The same thing applies: we make sure the worst-case execution time fits within the allowed intervals and use a master clock to sync everything up.
This is in contrast to non-real-time systems, which tend to just freeze for random periods of time (sometimes seconds or even minutes) without any apparent cause. That's something that really puzzles me about today's software+hardware. In theory it should all be faster than ever, but in practice I spend as much or even more time waiting for my computers than I ever did in the past.
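A toy model of that kind of rate-based scheduling, in Python. The task names, periods, and worst-case execution times are made up for illustration; the point is just the shape of the check: everything due at a tick has to fit in the base interval.

```python
from math import gcd
from functools import reduce

# Hypothetical task set: (name, period_us, worst_case_execution_us).
TASKS = [("sensor", 50, 10), ("filter", 100, 15), ("log", 200, 20)]

def hyperperiod(tasks):
    """The schedule repeats every lcm of the task periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), (p for _, p, _ in tasks))

def schedule(tasks):
    """Cyclic schedule over one hyperperiod: at each base tick, list the
    tasks due, shortest period (highest rate) first."""
    base = min(p for _, p, _ in tasks)
    by_rate = sorted(tasks, key=lambda t: t[1])
    return [(t, [name for name, p, _ in by_rate if t % p == 0])
            for t in range(0, hyperperiod(tasks), base)]

def feasible(tasks):
    """Crude check: the total WCET of everything due at any tick must
    fit within the base (master-clock) interval."""
    base = min(p for _, p, _ in tasks)
    wcet = {name: w for name, _, w in tasks}
    return all(sum(wcet[n] for n in due) <= base for _, due in schedule(tasks))
```

With these numbers the worst tick is t=0, when all three tasks fire (10+15+20 = 45 us, inside the 50 us interval), so the set is feasible.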
Maybe I'm just more impatient but I don't believe that's the reason here.
Real time should be the norm, not the exception, just like encrypted communications should be the norm, not the exception.
Computers should respond without noticeable latency to user input at all times.
Responsive interfaces allow users to develop muscle memory.
I used to type complicated sequences of commands into my Commodore 64 to perform common actions. (wow, I didn't know I wanted a $HOME with some scripts in it. Now I know!)
When I'd make a typo in one of those sequences, it would commonly be quicker for me to reset the machine and start from the top.
If I performed the same action twice (with a reset in between) and got a different result, I could logically conclude that I had a hardware problem. (Not every HW problem is permanent. Heat and grounding problems can both be fixed, unless some threshold is crossed...)
Anyway, I figure that Wintel denied grandma that kind of computer because they learned from the hobbyists that neophyte users with quality hardware+software will exceed the creators in skill in less time than it takes to design the next gen rig--and a couple genius users will start building their own out of impatience!
There's no profit in quality hardware+software combinations.
I'm chalking it up to "it's the future, and always will be ..."
Now that Moore's Law is on its way out, people can actually try new things, and discover what pays off and what does not.
What may have been hard to implement 30-40 years ago may be easier now with current technology. Some of these could definitely supplement existing binary/boolean silicon in certain domains, if not replace it, like using actual brains in AI-as-a-Service for image recognition and so on.
The upsides are real but other avenues of development may still have higher payoff vs cost (effort).
Warning: Slightly commercial in nature. But some good information about how it works starting on page 4. Worth reading from there.
Archive.org has some of their old FLEET architecture papers and slide decks: 
I just figured that you could redesign common ICs so that they had a new wire akin to the "carry" bit. I called it the 'done' wire, and I figured you could just tie it to the CLK of the next IC. Ya know? So 'doneness' would propagate across the surface of the motherboard (or SoC) in different ways depending on the operation it was performing. Rather than the CLK signal, which is broadcast to all points...
(I know that my idea is half baked and my description is worse. I'm glad I found this PDF!)
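The intuition can be sketched with a toy latency comparison. The stage delays below are made-up numbers; the point is that a global clock pays the slowest stage's price everywhere, while a rippling 'done' wire pays only each stage's actual delay.

```python
# Toy comparison of a clocked pipeline vs. a 'done'-wire pipeline.
# Per-stage delays (in ns) are illustrative, not measured.
STAGE_DELAYS = [3, 7, 2, 5]

def clocked_latency(delays):
    """Global clock: every stage takes one clock period, and the clock
    period must cover the slowest stage."""
    return max(delays) * len(delays)

def done_wire_latency(delays):
    """'Done' wire: each stage starts as soon as its predecessor asserts
    'done', so total latency is just the sum of the actual delays."""
    return sum(delays)
```

For these numbers, the clocked version needs 7 ns x 4 stages = 28 ns, while the done-wire version finishes in 3+7+2+5 = 17 ns.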
I knew the big advantage would be power savings. I called the idea 'slow computing', and I envisioned an 8-bit style machine that would run on solar or a hand crank and be able to pause mid-calculation until enough power was available... Just like an old capacitor-based flash camera can flash more frequently when you have fresh batteries in it.
You'd just wire the power system up with the logic. Suppose an adder fires a "done" at some other IC. Now, put your power system inline, like a MITM... When it gets the "done", it charges that capacitor (a very small one? :) ) and only when enough power is available does it propagate the "done". ...Maybe the "done" powers the next IC. I dunno.
As I said, half baked. Glad to find out that I'm not the only one that dreamed of 'clockless', though!
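For what it's worth, the energy-gated 'done' idea can be modeled in a few lines. All numbers here are invented: a harvester trickles charge into a capacitor each tick, and a stage's 'done' only propagates once enough charge has accumulated to fire it.

```python
# Toy model of 'slow computing': each stage needs some amount of energy
# to fire, and the chain pauses mid-calculation until the capacitor has
# harvested enough charge. Units are arbitrary.
def run_chain(stage_costs, harvest_per_tick):
    charge, ticks = 0, 0
    for cost in stage_costs:
        while charge < cost:        # pause until enough power is available
            charge += harvest_per_tick
            ticks += 1
        charge -= cost              # fire the stage, propagate its 'done'
    return ticks, charge

ticks, leftover = run_chain([5, 3, 8], harvest_per_tick=2)
```

With a weaker harvester (smaller `harvest_per_tick`) the same computation still completes, it just takes more ticks, which is exactly the hand-crank behavior described above.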
There are several options. One is simply to add a delay element to each circuit, matched to that circuit's worst-case delay. Another is to use a circuit-level handshaking protocol, with request/acknowledge signaling conceptually similar to TCP's acknowledgements.
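The handshaking option is often done as a four-phase (return-to-zero) protocol. Here's a minimal sketch, modeling the wires as dictionary entries; the signal names are generic, not from any particular design.

```python
# Four-phase (return-to-zero) handshake between two stages, modeled
# with a dict standing in for the req/ack/data wires.
def four_phase_transfer(data, channel):
    events = []
    channel["data"], channel["req"] = data, 1   # sender drives data, raises req
    events.append("req+")
    latched = channel["data"]                   # receiver latches the data...
    channel["ack"] = 1                          # ...and raises ack
    events.append("ack+")
    channel["req"] = 0                          # sender sees ack, drops req
    events.append("req-")
    channel["ack"] = 0                          # receiver drops ack; channel idle
    events.append("ack-")
    return latched, events

channel = {"data": None, "req": 0, "ack": 0}
value, events = four_phase_transfer(42, channel)
```

The cost hinted at below is visible even here: every transfer needs a full req+/ack+/req-/ack- round trip before the channel is idle again.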
It's not an easy thing to tackle and leads to performance loss in the long run relative to a synchronous design.
I'm for approaches that may be superior overall.
Didn't understand it though.