The odd thing is that they don't compute timing in RL, but claim that somehow TNS and WNS (total and worst negative slack) improved. Does anyone believe this? With five circuits and three wins, the results are a coin toss.
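(For anyone skimming: WNS and TNS are standard timing metrics derived from per-endpoint slack. A minimal sketch with made-up slack values, nothing from the paper:)

```python
# Slack per timing endpoint: required arrival time minus actual arrival time.
# Negative slack means a timing violation.
slacks_ps = [120.0, -35.0, 4.0, -80.0, -12.5]  # hypothetical values, picoseconds

# WNS (worst negative slack): the single worst violation, capped at zero.
wns = min(min(slacks_ps), 0.0)

# TNS (total negative slack): the sum of all violations.
tns = sum(s for s in slacks_ps if s < 0)

print(f"WNS = {wns} ps, TNS = {tns} ps")  # WNS = -80.0 ps, TNS = -127.5 ps
```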
It's actually not clear who was bullied. The two researchers ganged up on Chatterjee and got him fired because he used the word "fraud" - wrongful termination of a whistleblower. Google only recently settled with Chatterjee for an undisclosed amount.
Good question. It's not just ibm14 - everything people outside Google tried shows that RL is much worse than prior methods. NVDLA, BlackParrot, etc. There is a strong possibility that Google pre-trained RL on certain TPU designs, then tested on them, and submitted that to Nature.
Oh, man... this is the same old stuff from the 2023 Anna Goldie statement (is this Anna Goldie's comment?). This was all addressed by Kahng in 2023 - no valid criticisms. Where do I start?
Kahng's ISPD 2023 paper is not in dispute - no established experts objected to it. The Nature paper is in dispute. Dozens of experts objected to it: Kahng, Cheng, Markov, Madden, Lienig, Swartz objected publicly.
The fact that Kahng's paper was invited doesn't mean it wasn't peer reviewed. I checked with ISPD chairs in 2023 - Kahng's paper was thoroughly reviewed and went through multiple rounds of comments. Do you accept it now? Would you accept peer-reviewed versions of other papers?
Kahng is the most prominent active researcher in this field. If anyone knows this stuff, it's Kahng. There were also five other authors in that paper, including another celebrated professor, Cheng.
The pre-training thing was disclaimed in the Google release. No code, data or instructions for pretraining were given by Google for years. The instructions said clearly: you can get results comparable to Nature without pre-training.
The "much older technology" is also a bogus issue because the HPWL scales linearly and is reported by all commercial tools. Rectangles are rectangles. This is textbook material. But Kahng etc al prepared some very fresh examples, including NVDLA, with two recent technologies. Guess what, RL did poorly on those. Are you accepting this result?
The bit about financial incentives and open-source is blatantly bogus, as Kahng leads OpenROAD - the main open-source EDA framework. He is not employed by any EDA company. It is Google who has huge incentives here - see Demis Hassabis's tweet: "our chips are so good...".
The "Stronger Baselines" matched compute resources exactly. Kahng and his coauthors performed fair comparisons between annealing and RL, giving the same resources to each. Giving greater resources is unlikely to change results. This was thoroughly addressed in Kahng's FAQ - if you only could read that.
The resources used by Google were huge. The Cadence tools in Kahng's paper ran hundreds of times faster and produced better results. That is as conclusive as it gets.
It doesn't take a Ph.D. to understand fair comparisons.
For AlphaChip, pre-training is just training. You train, and save the weights in between. This has always been supported by Google's open-source repository. I've read Kahng's FAQ, and he fails to address this, which is unsurprising, because there's simply no excuse for cutting out pre-training for a learning-based method. In his setup, every time AlphaChip sees a new chip, he re-randomizes the weights and makes it learn from scratch. This is obviously a terrible move.
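To make this concrete, here is a minimal PyTorch-style sketch of the difference (the network, training step, and file names are hypothetical placeholders, not AlphaChip's actual code):

```python
import torch

def make_policy():
    # Stand-in for the placement policy network (hypothetical architecture).
    return torch.nn.Sequential(torch.nn.Linear(128, 256), torch.nn.ReLU(),
                               torch.nn.Linear(256, 64))

# "Pre-training is just training": train on prior chips, then save the weights.
policy = make_policy()
# ... train on a corpus of earlier designs here ...
torch.save(policy.state_dict(), "pretrained_policy.pt")

# Fine-tuning a new chip: load the saved weights and continue training.
policy_finetune = make_policy()
policy_finetune.load_state_dict(torch.load("pretrained_policy.pt"))

# The criticized setup: fresh random weights for every new chip,
# discarding all prior experience.
policy_scratch = make_policy()
```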
HPWL (half-perimeter wirelength) is an approximation of wirelength, which is only one component of the chip floorplanning objective function. It is relatively easy to crunch all the components together and optimize HPWL --- minimizing actual wirelength while avoiding congestion issues is much harder.
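For reference, HPWL takes only a few lines to compute (a minimal sketch; the pin coordinates are made up):

```python
def hpwl(pins):
    """Half-perimeter wirelength of one net: half the perimeter of the
    bounding box that encloses all of the net's pins."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# A hypothetical 3-pin net.
print(hpwl([(0.0, 0.0), (4.0, 1.0), (2.0, 5.0)]))  # 9.0

# The HPWL of a placement is the sum over all nets. Because it is built from
# bounding boxes, it scales linearly with feature size across technology nodes.
```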
Simulated annealing is good at quickly converging on a bad solution to the problem, with relatively little compute. So what? We aren't compute-limited here. Chip design is a lengthy, expensive process where even a few-percent wirelength reduction can be worth millions of dollars. What matters is the end result, and ML has SA beat.
(As for conflict of interest, my understanding is Cadence has been funding Kahng's lab for years, and Markov's LinkedIn says he works for Synopsys. Meanwhile, Google has released a free, open-source tool.)
It's not that one needs an excuse. The Google CT repo said clearly that you don't need to pretrain. "Supported" usually includes at least an illustration and some scripts to get it going - no such thing was there before Kahng's paper. Pre-training was not recommended and was not supported.
Everything optimized in the Nature RL work is an approximation. HPWL is where you start, and RL uses it in the objective function too. As shown in "Stronger Baselines", RL loses a lot on HPWL - so much that nothing else can save it. If your wires are very long, you need routing tracks to route them, and you end up with congestion too.
SA consistently produces better solutions than RL for various time budgets. That's what matters. Both papers have shown that SA produces competent solutions. You give SA more time, you get better solutions. In a fair comparison, you give equal budgets to SA and RL. RL loses. This was confirmed using Google's RL code and two independent SA implementations, on many circuits. Very definitively. No, ML did not have SA beat - please read the papers.
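The equal-budget protocol is easy to state in code (a schematic sketch; `sa_placer`, `rl_placer`, and `netlist` are hypothetical stand-ins, not the papers' actual harness):

```python
import time

def run_with_budget(placer, netlist, budget_s):
    """Run a placer until the wall-clock budget expires; return the best cost."""
    best = float("inf")
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        best = min(best, placer.step(netlist))  # one improvement iteration
    return best

# Same budget for both methods; compare the resulting costs.
# hpwl_sa = run_with_budget(sa_placer, netlist, budget_s=3600)
# hpwl_rl = run_with_budget(rl_placer, netlist, budget_s=3600)
```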
Cadence hasn't funded Kahng for a long time. In fact, Google funded Kahng more recently, so he has all the incentives to support Google. Markov's LinkedIn page says he worked at Google before. Even Chatterjee, of all people, worked at Google.
Google's open-source tool is a head fake, it's practically unusable.
Update: I'll respond to the next comment here since there's no Reply button.
1. The Nature paper said one thing, the code did something else, as we've discovered. The RL method does some training as it goes. So, pre-training is not the same as training. Hence "pre". Another problem with pre-training in Google's work is data contamination - we can't check the test data against the training data. The Google folks admitted to training and testing on different versions of the same design. That's bad. Rejection-level bad.
2. HPWL is indeed a nice, simple objective. So nice that Jeff Dean's recent talks use it. It is chip design. All commercial circuit placers without exception optimize it and report it. All EDA publications report it. Google's RL optimized a weighted combination of HPWL, density, and congestion.
3. This shows you aren't familiar with EDA. Simulated annealing was the king of placement from the mid-1980s to the mid-1990s. Most chips were placed by SA. But you don't have to go far - as I recall, the Nature paper says they used SA to postprocess macro placements.
SA can indeed find mediocre solutions quickly, but it keeps improving them, just like RL. Perhaps you aren't familiar with SA. I am. There are provable results showing that SA finds an optimal solution given enough time. No such results exist for RL.
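For anyone who hasn't seen SA before, here is a toy annealing loop on a one-dimensional placement problem (made-up netlist, not a production placer):

```python
import math, random

def anneal(state, cost, neighbor, t0=1.0, cooling=0.995, steps=20000):
    """Generic SA: accept uphill moves with probability exp(-delta/T), so the
    search can escape local minima; as T decays, it approaches greedy search."""
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur)
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= cooling
    return best

# Toy problem: order six cells on a row to minimize total wire span
# between connected pairs (hypothetical netlist).
nets = [(0, 3), (1, 4), (2, 5), (0, 5)]
cost = lambda perm: sum(abs(perm.index(a) - perm.index(b)) for a, b in nets)

def neighbor(perm):
    p = list(perm)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

print(anneal(list(range(6)), cost, neighbor))
```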
The Nature paper describes the importance of pre-training repeatedly. The ability to learn from experience is the whole point of the method. Pre-training is just training and saving the weights -- this is ML 101.
I'm glad you agree that HPWL is a proxy metric. Optimizing HPWL is a fun applied math puzzle, but it's not chip design.
I am unaware of a single instance of someone using SA to generate real-world, usable macro layouts that were actually taped out, much less for modern chip design. That is in part due to SA's struggles to manage congestion, which result in unusable layouts. SA converges quickly to a bad solution, but that is of little practical value.
SA and HPWL are most definitely used as of today for the chips that power the GPUs used for "ML 101". But frankly, this has the same value as saying "some sorting algorithm is used somewhere" -- they're well-entrenched basics of the field. To claim that SA produces "bad congestion" is like claiming that cooking with steel pans produces bad food -- it needs a shitton of context and qualification, since you cannot generalize this way.
As someone unfamiliar with the topic but trying to piece together information, I have to admit that they do a better job of convincing me of the potential of reinforcement learning in chip design.
As with most, if not all, applications of reinforcement learning, there are always traditional algorithms that outperform it. But that does not mean that the approach lacks promise, or is at least interesting.
Sure, the paper might have polished up some results, but if that is the case, it is better addressed through the appropriate channels. Engaging in public criticism does not build too much trust, at least not with this curious observer.
Hm... I also had to piece things together and agree that Google PR is pretty slick. The approach had promise 3-4 years ago, but the science seems clear now. Google is avoiding tests on shared chip designs but claims a breakthrough. There is no breakthrough as everyone is still using Cadence or Synopsys software tools.
Maybe you can make RL work for chip design at some point, but if the paper "polished up some results", why is it still getting any respect? You are right about "appropriate channels" - that's what Chatterjee and Kahng tried, but Chatterjee was fired by Google as a whistleblower (red flag!) while Kahng is getting flak even in these comments (another red flag!). Where would you look next as an independent observer?
For it to be an appeal to authority, the GP would have had to rely only on the expert's opinion, with no actual evidence. But the GP actually gave a lot of evidence of the researcher's expertise, in the form of peer-reviewed papers and other links. That's not an appeal to authority at all.