I’ve never liked that term “sub-symbolic”. It implies that there is something at a deeper level than what a Turing machine can compute (i.e., via the manipulation of strings of symbols), and as far as we can tell, there’s no evidence for that. It might be true, but even a quantum computer can be simulated on a classical computer. And of course neural networks run on classical computers too.
Yeah, I know that’s not what “symbol” is really referring to in this context, but I don’t like what the word’s semantics suggest about neural networks: that they are somehow halting oracles or capable of hypercomputation, which they’re obviously not.
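To make the “it’s all Turing-computable” point concrete: a neural network’s forward pass is just finite arithmetic over finite strings of symbols, which is exactly the kind of thing a Turing machine does. Here’s a minimal Python sketch of a toy 2-2-1 network (the weights and sizes are made up for illustration), using exact rationals so there’s no hand-waving about real numbers:

    from fractions import Fraction as F

    # Toy hidden-layer and output weights (made up for illustration).
    # Every value is an exact rational, i.e., a finite string of symbols.
    W1 = [[F(1, 2), F(-1, 3)], [F(2, 5), F(1, 4)]]
    W2 = [F(3, 4), F(-1, 2)]

    def relu(x):
        # Piecewise-linear nonlinearity: still just a comparison and a constant.
        return x if x > 0 else F(0)

    def forward(x):
        # Dot products and a nonlinearity -- finite symbol manipulation
        # throughout, trivially computable by a Turing machine.
        hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        return sum(w * h for w, h in zip(W2, hidden))

    print(forward([F(1), F(2)]))  # prints -9/20, an exact, computable result

Real networks use floating point, but floats are just a finite encoding too, so nothing changes in principle.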
It's not the name I would have chosen either (probably), but I wasn't around when those decisions were being made, and nobody asked me for my opinion. So I just roll with it. What can ya do?
Oh for sure! Wasn’t critiquing your comment at all. I’ve seen the term a lot lately, and it made me wonder how much the industry uses it as misleading hype. E.g., the implication that LLMs are “better” than Turing machines because they operate at a level “below” Turing machines, even though the comparison doesn’t make sense: “symbolic” in this context was never referring to the symbol-manipulating nature of Turing machines in the first place.