
Hi, thanks for the feedback.

We think this approach has the potential to produce networks that are useful in applications where the weights must be trained very quickly for the network to adapt to a given task.
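
To give a rough sense of what "training the weights quickly" can mean here: in a weight agnostic network, every connection shares a single weight value, so adapting to a new task can reduce to a one-dimensional search over that shared weight. Below is a minimal toy sketch of that idea in Python; the tiny topology and all the names are made up for illustration, not code from our paper.

    import numpy as np

    # Toy weight agnostic network: the topology (a binary connection
    # mask) is fixed, and every active connection shares one weight w.
    def wann_forward(x, w, mask):
        h = np.tanh((mask * w) @ x)    # hidden layer, shared weight w
        return np.tanh(w * h.sum())    # readout also uses the shared w

    # "Training" is just a 1-D search over the shared weight, which is
    # why adapting the network to a new task can be very quick.
    def quick_adapt(xs, ys, mask, candidates=np.linspace(-2, 2, 41)):
        losses = [np.mean([(wann_forward(x, w, mask) - y) ** 2
                           for x, y in zip(xs, ys)])
                  for w in candidates]
        return candidates[int(np.argmin(losses))]

    rng = np.random.default_rng(0)
    mask = (rng.random((4, 3)) > 0.5).astype(float)  # 3 inputs, 4 hidden
    xs = [rng.standard_normal(3) for _ in range(32)]
    ys = [np.tanh(x.sum()) for x in xs]
    print("best shared weight:", quick_adapt(xs, ys, mask))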

As we note in the discussion section, the ability to quickly fine-tune weights might find uses in few-shot learning and in continual lifelong learning, where agents continually acquire, fine-tune, and transfer skills throughout their lifespan.

The question is which "super task" to optimize a WANN for so that it is useful for many subtasks it was never optimized for in the first place. We think optimizing a WANN to be good at exploration and curiosity might be a good start, so that it develops priors that carry over to new, unseen tasks in its environment. There's a lot more work that can be done from here ...

If I understand correctly, the idea isn't that you'd only use the architecture untrained; rather, the goal is better architecture discovery for a given task or group of tasks.

Is that accurate?