
> Educating a human takes many years, and we can't easily transfer knowledge between humans. There's a lot of room for improvement. Transferring knowledge between machines could be much more efficient.

You're assuming that it's possible to do efficient knowledge transfer without losing general intelligence.

Maybe in order to have general intelligence about something, you need to learn it from scratch, over time, rather than just "copying" the information.

Another problem is that you're assuming you can build a powerful enough computer while still keeping it programmable. That is, that it has easily programmable memory cells which can change its behavior. But easily programmable memory cells need space and infrastructure to access them, and they interfere with critical timing paths. That's why a special-purpose ASIC built for one task will always be faster than a CPU or GPU.

Maybe all the things we consider useless for intelligence, like playing, sleeping, exercising etc., are actually necessary. We've certainly started to see that people who focus too heavily on cramming don't necessarily become smarter.

You can put me in the "skeptics" camp when it comes to superhuman intelligence. It may be possible, and it's fun to discuss, but it seems to me that the people who fuss over it are making an incredible number of assumptions.




Okay, but CPUs, GPUs, and ASICs can all be manufactured. And I don't see anyone building a computer whose software can't be copied unless it's for artificial reasons like DRM.

So it seems like the question is whether computers as we know them can do general intelligence at all? If they can, it will be easy to clone.

If they can't, then it would imply that general intelligence is something more than information processing as we know it; perhaps some kind of dualism?


How do you take a subset of knowledge from neural net A, such as "cats have fur", and merge it with neural net B?

It's not a software or hardware problem; it's a data problem. It's not obvious what part of neural net A encodes "fur" and what part encodes "cats", or how you map that onto B's encoding of fur and cats while connecting them.

Now, AI is not necessarily going to be neural nets, but it's also not necessarily understandable what all the little bits do, just that they work.
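A minimal numpy sketch of why this is hard (toy two-layer nets with random stand-in weights, no real training): two nets can compute the *identical* function while storing it in completely different parameter slots, so there's no coordinate system in which you can line up "what encodes fur" and average it across.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(x, W1, W2):
    # Minimal two-layer net: y = W2 @ relu(W1 @ x)
    return W2 @ np.maximum(W1 @ x, 0.0)

# Net A: arbitrary "trained" weights (random stand-ins for this sketch).
W1a = rng.normal(size=(8, 4))
W2a = rng.normal(size=(3, 8))

# Net B computes the exact same function: its hidden units are just a
# permutation of A's, so the same "knowledge" lives in different slots.
perm = np.roll(np.arange(8), 1)
W1b = W1a[perm]
W2b = W2a[:, perm]

x = rng.normal(size=4)
assert np.allclose(net(x, W1a, W2a), net(x, W1b, W2b))  # same function

# A naive "merge" by averaging parameters yields a different function,
# even though A and B encode exactly the same thing.
W1m = (W1a + W1b) / 2
W2m = (W2a + W2b) / 2
merged = net(x, W1m, W2m)  # does not match A (or B) in general
```

Real trained nets differ by much more than a permutation, which only makes the alignment problem worse.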


That seems like a problem with software composition? If you just want to clone a neural network, you can do that without knowing how it works. In git terms, we have fork but not merge.
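To make the fork/merge asymmetry concrete, here's a toy sketch (a "model" stands in as a dict of random numpy weight arrays): cloning requires zero understanding of what any weight means, while there's no analogous one-liner for merging.

```python
import copy
import numpy as np

rng = np.random.default_rng(1)

# A "model" is just its parameters plus a fixed architecture.
params = {"W1": rng.normal(size=(8, 4)), "W2": rng.normal(size=(3, 8))}

def forward(p, x):
    # Same toy two-layer net: y = W2 @ relu(W1 @ x)
    return p["W2"] @ np.maximum(p["W1"] @ x, 0.0)

# The "fork": a bit-for-bit copy, needing no insight into the weights.
clone = copy.deepcopy(params)

x = rng.normal(size=4)
assert np.allclose(forward(params, x), forward(clone, x))
```

The clone behaves identically forever after (until one copy is trained further); a semantic "merge" of two independently trained models has no such mechanical recipe.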



