
I'm not sure we want to work with "the most general data structure possible". We could use a Hopfield-like network where every neuron is connected to every other neuron and to itself, but it probably wouldn't be very useful. NN design has been moving from more general to less general architectures.
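
To make the example concrete, here is a minimal sketch (my illustration, not the commenter's) of that maximally connected Hopfield-style net: a dense N x N weight matrix, with the diagonal kept so every unit is also wired to itself. The pattern count, size, and update rule are all assumptions for the demo.

  import numpy as np

  rng = np.random.default_rng(0)
  N = 16                                         # number of binary (+/-1) units
  patterns = rng.choice([-1, 1], size=(3, N))    # 3 stored patterns (arbitrary)

  # Hebbian storage: sum of outer products. The diagonal (self-connections)
  # is deliberately kept to match "connected to every other neuron and to itself".
  W = patterns.T @ patterns / N

  def update(state, steps=10):
      """Synchronous sign-update; often settles at or near a stored pattern."""
      for _ in range(steps):
          state = np.sign(W @ state)
          state[state == 0] = 1                  # break ties deterministically
      return state

  noisy = patterns[0].copy()
  noisy[:4] *= -1                                # corrupt a few bits
  print(np.array_equal(update(noisy), patterns[0]))

The point of the sketch is that nothing about the wiring restricts what it can represent; the restriction (and most of the usefulness) comes from everything layered on top.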



The implication is that, at least in theory, the promise is to use them at similar total complexity (e.g. number of parameters and amount of required computation), in which case yes, we do want the most general data structure possible. If a more general data structure provides similar performance characteristics, it is easier to apply, to debug, and to understand, and it likely means we have found something more fundamental about the underlying world in general.


I think you're conflating the "generality" of the NN architecture with the "generality" of the inductive bias the NN models.
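
A rough sketch of that distinction (my own illustration, using PyTorch and arbitrary layer sizes): an MLP is the more general architecture, since any pixel can interact with any other through a dense weight matrix, while a CNN restricts interactions to shared local filters and so encodes a much stronger inductive bias about the data.

  import torch.nn as nn

  x_dim = 28 * 28

  # General architecture, weak inductive bias: all-to-all pixel interactions.
  mlp = nn.Sequential(nn.Flatten(), nn.Linear(x_dim, 128), nn.ReLU(),
                      nn.Linear(128, 10))

  # Less general architecture, strong inductive bias: weights shared across
  # spatial positions, only local neighbourhoods interact.
  cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 28 * 28, 10))

  print(sum(p.numel() for p in mlp.parameters()),
        sum(p.numel() for p in cnn.parameters()))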




