For one, input and output sizes have to be fixed. All these NNs doing image transformations or recognition only work on fixed-size images. How would you sort a set of integers of arbitrary size using a neural network? What does "solve with a NN" even mean in that context?
Another problem/limitation I can think of is that NNs don't have state. A NN can't push something onto a stack and then iterate. How do you divide and conquer using NNs?
Are NNs Turing complete? I don't see how they possibly could be.
Input and output sizes don't have to be fixed. E.g. speech recognition doesn't work with fixed sized inputs. Natural language processing deals with many different length sequences. seq2seq networks are explicitly designed to deal with problems that have variable length inputs and outputs that are also variable in length and different from the input.
> One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols.
To me it sounds like they use an RNN to learn a hash function.
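For anyone wondering how a variable-length input can become a fixed-length vector: an RNN encoder applies the same state-update rule at every time step, so the size of its hidden state never depends on the length of the sequence. Here is a minimal, untrained sketch in numpy (the weight matrices and sizes are made up for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 8  # fixed-size state, independent of input length
EMBED = 4   # per-symbol input vector size

# Hypothetical (random, untrained) weights, just to show the mechanics.
W_in = rng.normal(scale=0.1, size=(HIDDEN, EMBED))
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))

def encode(sequence):
    """Fold a variable-length list of input vectors into one fixed-length vector."""
    h = np.zeros(HIDDEN)
    for x in sequence:
        # The same recurrence is applied at every step, so any length works.
        h = np.tanh(W_in @ x + W_h @ h)
    return h

short = [rng.normal(size=EMBED) for _ in range(3)]
long = [rng.normal(size=EMBED) for _ in range(50)]

# Both encodings have the same shape regardless of sequence length.
print(encode(short).shape, encode(long).shape)  # (8,) (8,)
```

Whether that counts as "learning a hash function" is debatable: unlike a hash, the encoding is trained so that the decoder can reconstruct a related sequence from it, so it has to preserve structure, not just identity.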