Hacker News

Hmmm... From my experience writing Artificial Life simulators in the 90s, there is so much that is dependent on the parameters that the simulation coders write into the models. Yes, if you build a Genetic Algorithm where you start with a photo-sensitive cell, and then you reward agents that are able to navigate a maze, you're going to end up "evolving" agents that use a bunch of photo-sensitive cells like an eye. Has that really told you something about evolution, about life, or about the simulation that you set up?
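The setup described above — start agents with a single photosensitive cell, reward light-guided navigation, watch sensor clusters "evolve" — can be sketched as a toy genetic algorithm. Everything here (the cosine sensor response, the per-cell cost, the mutation rates) is a hypothetical illustration of the kind of parameters the simulation coder bakes in, not any actual ALife system:

```python
import math
import random

random.seed(0)

def sensor_reading(angle, beacon_angle):
    # A photosensor's response falls off with angular distance to the light.
    return max(math.cos(angle - beacon_angle), 0.0)

def fitness(genome, trials=20):
    # Reward genomes whose sensor array lets the agent estimate
    # where the light beacon is; charge a small cost per cell.
    score = 0.0
    for _ in range(trials):
        beacon = random.uniform(0, 2 * math.pi)
        readings = [(sensor_reading(a, beacon), a) for a in genome]
        # Crude "brain": head toward the strongest sensor's direction.
        best_angle = max(readings)[1]
        err = abs(math.atan2(math.sin(best_angle - beacon),
                             math.cos(best_angle - beacon)))
        score += 1.0 - err / math.pi
    return score / trials - 0.01 * len(genome)

def mutate(genome):
    g = list(genome)
    r = random.random()
    if r < 0.10:                      # grow a new photosensitive cell
        g.append(random.uniform(0, 2 * math.pi))
    elif r < 0.15 and len(g) > 1:     # lose a cell
        g.pop(random.randrange(len(g)))
    else:                             # jitter an existing cell's placement
        i = random.randrange(len(g))
        g[i] += random.gauss(0, 0.2)
    return g

def evolve(pop_size=40, generations=60):
    # Every agent begins with exactly one photosensitive cell.
    pop = [[random.uniform(0, 2 * math.pi)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(f"evolved {len(best)} cells, fitness {fitness(best):.2f}")
```

Multi-cell "eyes" reliably emerge — but only because the fitness function, the sensor model, and the mutation operators were all chosen to make that outcome reachable, which is exactly the point being made above.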



I do think it's the third, _but_ I also think it's different from the kitchen-sink organism simulations that really were just watching a lot of shallowly-programmed behaviors interact with each other. I think of it more as a model than a simulation - a model that tries to explain a specific real-life phenomenon in as focused and parsimonious a way as possible.

Here, the takeaway is that the emergence of two different types of eye – compound and camera-like eyes – can be modelled by a set of 3 specific tasks, in combination with a minimal set of anatomical knobs and switches. Then it might actually be _informative_ to contrast the clear evidence from the model with the less conclusive explanations we can draw from the methods of evo-bio.

(A good analogy would be to look at how gates, latches, and clocks alone can account for the "emergence" of modern superscalar microarchitectures, without having to resort to modelling the analog madness of pushing high frequencies through physical circuits, for example.)


The latter, obviously. You can't just run experiments as in physics, so that's all you can have. The proper question (I guess) is whether that knowledge is completely useless, or whether it might help with something down the line.





