On the Origin of Circuits (2007) (damninteresting.com)
52 points by chwolfe on July 14, 2015 | 12 comments



Does anyone know of any projects where they use a different set of primitives? In this article it's logic gates and real-world electromagnetic phenomena; in neural nets it's neurons...

Have there ever been experiments that tried using 3D primitives, or even language primitives from programming languages? There's an almost unlimited number of possible input primitives, yet the industry seems to focus only on neural nets.


> a different set of primitives

Lots. For instance, in 1985 I used evolution to breed tic-tac-toe programs (in assembly on simulated game CPUs).

Koza used many different approaches, including evolving Lisp programs (the tree structure works well).

For most problem-solving purposes, simulated annealing works as well or better, with vastly less computation, which is perhaps the main reason that things like genetic programming have stayed niche rather than taking over the world.
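
For reference, the annealing loop being compared against is tiny. A minimal sketch; the toy objective, neighbor function, and cooling schedule here are placeholders, not anyone's actual setup:

    import math, random

    def simulated_annealing(initial, energy, neighbor, t0=1.0, cooling=0.995, steps=10000):
        # Minimize energy(state), accepting some uphill moves with a
        # temperature-dependent probability so the search can escape local minima.
        state, e = initial, energy(initial)
        best, best_e = state, e
        t = t0
        for _ in range(steps):
            cand = neighbor(state)              # small random perturbation
            ce = energy(cand)
            # Always accept improvements; accept worse states with prob. exp(-delta/t).
            if ce < e or random.random() < math.exp((e - ce) / max(t, 1e-12)):
                state, e = cand, ce
                if e < best_e:
                    best, best_e = state, e
            t *= cooling                        # geometric cooling schedule
        return best, best_e

    # Toy use: find x minimizing (x - 3)^2.
    best_x, _ = simulated_annealing(
        initial=0.0,
        energy=lambda x: (x - 3.0) ** 2,
        neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    )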

Koza was also the one who first bred simulated creatures that learned to ambulate in a 3D environment, back in the late 80s, on Thinking Machines -- quite impressive in that era.

> the industry seems to only focus on neural nets.

There's more payoff per unit of computation. The right tool for the right job, and all that.


This project strikes me as similar. Evolve a computational network and train it in the real world:

http://www.ouroboros.org/evo_gaits.html


I dabble in this stuff on occasion and enjoy reading about it. My first favorite was Danny Hillis' story [1] about designing sorting algorithms using genetic algorithms on a massively parallel machine. He also showed the power of co-evolution: both solutions and tests evolve. Later, I saw the HUMIE awards [2] show off what people have accomplished. Then, there was the nice article [3] on John Koza's Invention Machine. Plenty to get a person interested in the stuff.

Yet, most of it happens in academia and paid industry, with a lot of good information not easily accessible to non-experts. There's not as much momentum in developing easy-to-use tools and frameworks for most use cases as we see with, say, web applications. This limits the field to people willing to put in significant time understanding the subject, the methods, their strengths/limitations, and the various implementations out there.

Nonetheless, I at least enjoy reading the abstracts and know I could contract a specialist for certain applications.

[1] http://kk.org/mt-files/outofcontrol/ch15-d.html

[2] http://www.genetic-programming.org/combined.html

[3] http://www.popsci.com/scitech/article/2006-04/john-koza-has-...


The problem with "evolutionary" circuits is that 99.99% of them don't work very well (dead organisms).

And that .01% often optimizes some weird corner case that kinda, sorta works but isn't really a "solution" (sickle cell anemia).

And in the .000001% case generates something genuinely useful (vision).

Gee, sort of like actual evolution, no?


That's my personal experience. With a single pass/fail condition, most of the population tended to get trapped at the naive solution - a 50% success rate.

This was the better part of a decade ago, during my sophomore year of university, so it's entirely possible I somehow fucked up the backprop that was guiding it. In hindsight, maybe I was pruning too hard. But my experience was that it's hard to get your nets to develop the complexity to escape the basic naive cases.
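
A common workaround, whether or not it applies to the setup above, is to score candidates on a graded fitness rather than a single pass/fail, so partial progress gets rewarded. A hypothetical illustration:

    def binary_fitness(candidate, tests):
        # All-or-nothing: most candidates score 0, so selection
        # has almost no gradient to work with.
        return 1.0 if all(candidate(x) == y for x, y in tests) else 0.0

    def graded_fitness(candidate, tests):
        # Partial credit per test case gives the population a smoother
        # path out of the naive ~50%-accuracy basin.
        return sum(candidate(x) == y for x, y in tests) / len(tests)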


I really hope some commentary develops from HN on this


Here is a Popular Science link to John Koza's version (http://www.popsci.com/scitech/article/2006-04/john-koza-has-...), which came out a year before this article.

Genetic algorithms produce really interesting results, but like Thompson's they also tend to produce irreproducible results, or specifically results that work in the environment where they were evolved but nowhere else.

That often makes the output less than useful and very challenging to certify. Some of the most successful work I know of has been done in antenna design.


Thanks. What if you were to evolve a program that solves a problem, but test each candidate simultaneously on multiple chips? Instead of one chip, it must work on 100 chips in order to pass, to sort of cancel out the idiosyncrasies of any one chip.

I imagine you'd still have problems, but they might be mitigated a bit.
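
One way to express that idea: evaluate each candidate on every chip and keep the worst score, so a design that exploits one chip's quirks can't win overall. A rough sketch, with the chip harness (chips, evaluate) entirely hypothetical:

    def robust_fitness(candidate, chips, evaluate):
        # Score the candidate on every chip and keep the worst result, so a
        # design that exploits one chip's quirks can't win overall.
        # 'chips' and 'evaluate' stand in for whatever hardware harness is used.
        return min(evaluate(candidate, chip) for chip in chips)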


Adrian ended up modifying his setup later so that rather than running on real hardware, the testbed ran in a hardware simulator that only used the documented properties of the chips. This made the final results not quite as concise (I recall something like 20% more gates used), but they worked on all equivalent chips. I looked for more stuff by him, but he ended up going into evolving database schemas or something like that and got out of the hardware game altogether. Kinda disappointing, really.


Certainly a two-stage fitness process would work, or, as mentioned, using additional controls on the simulation to avoid idiosyncrasies. It isn't that hard to build a small genetic 'evolver' (my favorites are some of the 'walking' ones which race blob things). But you can evolve all sorts of complex things if you are willing to wait long enough and don't care whether you understand the result or not. I built one to evolve a response packet to ssh clients to kill them (I got tired of botnets trying to brute-force passwords on a server I ran for a while); all you need is a fitness function, a mutate function, and time :-).
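
That "fitness function, mutate function, and time" loop is roughly this, in sketch form (the selection scheme and parameters are just one arbitrary choice):

    import random

    def evolve(population, fitness, mutate, generations=1000, elite=0.2):
        # Bare-bones evolutionary loop: keep the fittest fraction, refill the
        # population with mutated copies of the survivors, repeat.
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            survivors = ranked[: max(1, int(len(ranked) * elite))]
            population = survivors + [
                mutate(random.choice(survivors))
                for _ in range(len(ranked) - len(survivors))
            ]
        return max(population, key=fitness)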


It's been posted a number of times over the years but only ever received one comment before this post: https://news.ycombinator.com/item?id=8890167



