I've often read "I wonder if it _really_ understands or is just regurgitating ${something}" about AIs like this, and I wonder what the difference really is. The ability to explain why it's one way and not the other? An extremely low frequency of errors? If it didn't really understand Lisp, wouldn't we expect a fairly high rate of incorrect interpretations?
Presumably most _actual_ compilers don't produce correct output 100% of the time (i.e., they have bugs), but I think it's reasonable to say that a compiler _understands_ ${programming language}. Maybe the difference between "understanding" and "just memorizing answers" is subtler than it's often portrayed?
When I ask the question, what I really mean is "is there a mechanical structure that guarantees the correct output?" For example, we can train neural networks to compute functions such as "and" and "xor", and convince ourselves the network has "really learned" what it means to calculate the function (a hand-wired version of the kind of structure it converges to is sketched below).
Is that true for interpreting programming languages? If so, a bug isn't just "I haven't seen a similar enough example"; it reflects a deeper mistake that will likely show up again.
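To make the xor point concrete, here's a minimal sketch in Scheme (to match the post's language) of what I mean by a mechanical structure. The weights here are chosen by hand purely for illustration, not learned, and the names `step` and `xor-net` are just mine:

```scheme
;; A hand-wired two-layer network that computes xor with step activations.
(define (step x) (if (> x 0) 1 0))

(define (xor-net a b)
  (let ((h1 (step (+ a b -0.5)))          ; hidden unit ~ (a OR b)
        (h2 (step (+ (- a) (- b) 1.5))))  ; hidden unit ~ NOT (a AND b)
    (step (+ h1 h2 -1.5))))               ; output fires only when both fire

(xor-net 0 0) ; => 0
(xor-net 0 1) ; => 1
(xor-net 1 0) ; => 1
(xor-net 1 1) ; => 0
```

A trained network that gets all four cases right has, in effect, found weights playing the same role, and that's what I'd count as a structure that guarantees the output rather than memorized answers.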
Cool post -- wondering, does the final example of defining the Y combinator and using it really show much? `(factorial 5)` would probably run without defining the Y combinator first -- I'd be interested to see how it handled some novel function that probably isn't directly memorized.
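For reference, here's roughly the sort of thing I'd want to see it interpret: the standard applicative-order Y combinator with `factorial` defined through it, plus a less "memorizable" recursive function alongside (`digit-sum` is just a stand-in I picked, not anything from the post):

```scheme
;; Applicative-order Y (Z) combinator.
(define Y
  (lambda (f)
    ((lambda (x) (f (lambda (v) ((x x) v))))
     (lambda (x) (f (lambda (v) ((x x) v)))))))

;; The familiar case.
(define factorial
  (Y (lambda (self)
       (lambda (n)
         (if (= n 0) 1 (* n (self (- n 1))))))))

;; A function less likely to appear verbatim in training data.
(define digit-sum
  (Y (lambda (self)
       (lambda (n)
         (if (< n 10) n (+ (remainder n 10) (self (quotient n 10))))))))

(factorial 5)    ; => 120
(digit-sum 5381) ; => 17
```

If it traces `digit-sum` correctly without ever having seen that exact definition, that would tell me a lot more than `(factorial 5)` does.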