Hacker News

Nice try, but still a miss. Building assumptions into NNs won't work for anything complex, like the ladders and keys that, according to this author, are built into human brains? Please. Yes, NN are still in their infancy but they are clearly foundational to general intelligence. They will probably require many additional discoveries about how they need to be connected, and maybe even a few updates to the model of the neuron, but they aren't going away.

Secondly, of course a language model is going to parrot back the data it is trained on; that's a major goal, i.e. given a giant dataset, answer these questions. It's also how a lot of casual human conversation works: we generally just parrot back things we've read or heard. The interesting parts are the new syntheses that human brains come up with, but even those are standing on the shoulders of the dataset, so to speak.

You'll never get a truly neutral view of the data. I don't even know what such a thing would look like; maybe pure ignorance of the data could be considered neutral? Biases, a.k.a. heuristics, are a core part of learned intelligence. They can of course be flawed or entirely incorrect for a given environment, but they serve a purpose, and you can't do away with them or with the ability to form them from observed data. You can optimize them for specific goals, but you can't get by with just one.



> Please. Yes, NN are still in their infancy but they are clearly foundational to general intelligence.

This calls for a big fat "citation needed". What we call neural networks have very little to do with actual neurons, so I fail to see how this is supposed to be "clear". And it's been in its infancy since the sixties, really?


Technically, the perceptron was developed in the 50s, and it's kind of hilarious to read the original press release which claimed it was going to be conscious of its own existence.
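For anyone unfamiliar with the term: the perceptron is a single artificial neuron trained with a simple error-correction rule. A minimal sketch, using a toy AND-gate dataset and hyperparameters chosen purely for illustration:

```python
def perceptron_train(samples, labels, epochs=10, lr=1.0):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +1/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND gate: output 1 only when both inputs are 1 (linearly separable)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = perceptron_train(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
# preds == y for this linearly separable problem
```

That's the whole machine the 1958 press release was written about: a linear threshold unit that nudges its weights toward each misclassified example. It can only learn linearly separable functions, which is part of why the field later hit a wall.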


I’d say it’s clear because we continue to make significant progress in replicating the behaviour of biological NNs, with the main limitation being the amount of parallelism we can throw at it. If you read closely, I stated that our model of the neuron may require modification, but the basic idea is correct.

Our implementation of artificial NNs is in its infancy: we still don’t fully understand how to structure these networks, and the way we feed them data is nowhere close to how biological NNs get theirs from the environment, but the current SOTA is necessary because we don’t know how to build something capable of operating in the same way. We still have a lot to learn. We’re competing with hundreds of millions of years of evolutionary exploration using our fairly rudimentary understanding of biology. Sixty years is nothing, especially considering that progress is not inevitable and this area of science has major boom-and-bust cycles where not a lot gets done.


What about electric cars, since they've been around for 180 years? Is it wrong to say that EV tech is in its infancy? We're seeing significant innovation in electric vehicles and there's so much room for it to continue improving.

Technology goes through periods of explosive innovation as well as periods of very little growth. NNs were dead until they weren't, and they've been having quite the renaissance since 2012. Currently they seem to be getting us closest to general intelligence, and I'm excited to see where they go from here, and whether anything else comes along to take their place on the throne.



