
Even with that setup you end up with overfitting after a point. I.e. instead of getting something that works on every FPGA of that type, you start getting things that work on the specific FPGAs you provide. The same thing happens in machine learning for things like recognizing photos: after a while your algorithm stops recognizing cars and starts just recognizing those specific photos of cars.

Ideally, you want to take a bunch of FPGAs, pull out a random subsample of them only for acceptance testing, and stop evolving the circuit when the performance on the acceptance testing subset starts getting worse.
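As a rough illustration of that idea, here is a minimal sketch of a hill-climbing style evolutionary loop with a held-out set of boards and early stopping. All the names (evaluate_on_fpga, mutate, the toy fitness function) are hypothetical stand-ins, not anything from the article; a real setup would load the candidate bitstream onto actual hardware and measure its behaviour.

    import random

    # Hypothetical stand-in: in a real setup this would program the
    # candidate bitstream onto a physical FPGA and measure how well
    # the evolved circuit performs on that specific board.
    def evaluate_on_fpga(bitstream, fpga_id):
        # Toy fitness: number of set bits plus per-board noise,
        # just so the loop runs end to end.
        return sum(bitstream) + random.gauss(0, 0.5)

    def mean_fitness(bitstream, fpgas):
        return sum(evaluate_on_fpga(bitstream, f) for f in fpgas) / len(fpgas)

    def mutate(bitstream, rate=0.02):
        # Flip each bit with a small probability.
        return [b ^ (random.random() < rate) for b in bitstream]

    # Split the available boards: most drive evolution, a random
    # holdout subset is reserved purely for acceptance testing.
    all_fpgas = list(range(20))
    random.shuffle(all_fpgas)
    holdout, training = all_fpgas[:5], all_fpgas[5:]

    best = [random.randint(0, 1) for _ in range(64)]
    best_holdout = mean_fitness(best, holdout)
    patience, stale = 10, 0

    for generation in range(1000):
        candidate = mutate(best)
        if mean_fitness(candidate, training) > mean_fitness(best, training):
            best = candidate
        # Early stopping: quit once the holdout boards stop improving,
        # i.e. once further evolution only fits the training FPGAs.
        current_holdout = mean_fitness(best, holdout)
        if current_holdout > best_holdout:
            best_holdout, stale = current_holdout, 0
        else:
            stale += 1
            if stale >= patience:
                break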



Even if it worked on the specific FPGAs you provide, it might not work reliably at a different temperature (like the one in the article), or if the noise source is slightly further away, etc.

There is no way to fully stop the overfitting, because it's difficult if not impossible to test the circuit in every environment we want it to work under.



That's not really relevant to this. That is for selecting hyper-parameters for statistical models.


I know the article doesn't mention it, but you can use the exact same techniques that are used to prevent overfitting in machine learning.


That's basically what I said, and this is optimization, not machine learning. The problem is that the genetic algorithm fits the specifics of the FPGA and the environment it is optimized in, and doesn't work reliably on other FPGAs or in other environments.




