
For one, input and output size has to be fixed. All these NNs doing image transformations or recognition only work on fixed-size images. How would you sort a set of integers of arbitrary size using a neural network? What does "solve with a NN" even mean in that context?

Another problem/limitation I can think of is that NNs don't have state. A NN can't push something onto a stack and then iterate. How do you divide and conquer using NNs?

Are NNs Turing complete? I don't see how they possibly could be.




Input and output sizes don't have to be fixed. E.g. speech recognition doesn't work with fixed-size inputs, and natural language processing deals with sequences of many different lengths. seq2seq networks are explicitly designed for problems where both the input and the output are variable-length, and the output length can differ from the input's.
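To make the variable-length point concrete, here's a minimal sketch (not the full seq2seq architecture) of a single RNN cell applied step by step. The same weights are reused at every step, which is what lets the net consume a sequence of any length and still produce a fixed-size state. All weights below are random placeholders, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3
W_x = rng.normal(size=(hidden_size, input_size)) * 0.1  # input-to-hidden weights
W_h = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden-to-hidden weights

def encode(sequence):
    """Fold a variable-length sequence into one fixed-size hidden vector."""
    h = np.zeros(hidden_size)
    for x in sequence:                  # one step per input symbol
        h = np.tanh(W_x @ x + W_h @ h)  # same weights reused at every step
    return h

short = [rng.normal(size=input_size) for _ in range(2)]
long = [rng.normal(size=input_size) for _ in range(7)]
# Both encode to the same shape, regardless of input length:
print(encode(short).shape, encode(long).shape)  # (4,) (4,)
```

A trained seq2seq model pairs an encoder like this with a decoder that unrolls the fixed vector back into an output sequence.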

How would you sort integers? Using Neural Turing Machines: https://arxiv.org/abs/1410.5401

NTMs and other memory-network architectures have explicit memory as state (including stacks!); indeed, any recurrent neural net has state.
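The "any recurrent net has state" point can be shown in a few lines: the state is just a vector carried from one call to the next. This is an illustrative sketch with random placeholder weights, not from any particular memory-network paper.

```python
import numpy as np

class RNNCell:
    def __init__(self, size, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(size, size)) * 0.1
        self.U = rng.normal(size=(size, size)) * 0.1
        self.h = np.zeros(size)  # persistent state, carried across calls

    def step(self, x):
        # New state depends on both the input and the previous state.
        self.h = np.tanh(self.W @ x + self.U @ self.h)
        return self.h

cell = RNNCell(3)
a = cell.step(np.ones(3)).copy()
b = cell.step(np.ones(3))  # identical input, yet a different output,
print(np.allclose(a, b))   # because the internal state changed: False
```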

Are NNs Turing complete? Yes! http://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf


Interesting, thanks! On https://www.tensorflow.org/tutorials/seq2seq I found a link to https://arxiv.org/abs/1406.1078, which says

> One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols.

To me it sounds like they use an RNN to learn a hash function.
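It's closer to a learned compression than a hash: the encoder's fixed vector is trained so a decoder can unroll it back into a sequence of arbitrary length, and similar inputs map to nearby codes. A toy sketch of that decoding step, with random placeholder weights (illustrative only, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
size = 4
W = rng.normal(size=(size, size)) * 0.5  # placeholder hidden-to-hidden weights

def decode(vector, steps):
    """Unroll one fixed-size vector into `steps` output vectors."""
    h, outputs = vector, []
    for _ in range(steps):
        h = np.tanh(W @ h)  # each step produces the next output from state
        outputs.append(h)
    return outputs

code = rng.normal(size=size)  # stands in for the fixed-length representation
# The same code can be decoded into outputs of any length:
print(len(decode(code, 3)), len(decode(code, 8)))  # 3 8
```

In a real model the decoder also conditions on its previous output symbol and stops at a learned end-of-sequence token rather than a fixed step count.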

Thanks for the NTM link, I'll check it out.



