I’m curious: what’s dangerous about it? How do you square the inability to play tic-tac-toe or do value comparisons correctly with “we should compare this to a human’s reasoning”?
If it can’t do things like basic value comparison correctly, what business do we have saying it “reasons like a human”?
The danger is that when LLMs start to outperform humans on many tasks (which they already have), claiming that LLMs are stochastic parrots could be taken to imply that less intelligent people are also no better than stochastic parrots.
Who is claiming that implication is inevitable? Shutting down a valid line of discussion because someone decided to make a fallacious analogy would essentially stop all scientific discussion of intelligence more broadly. It’s also a great argument for limiting free speech and scientific discussion in general. Thoughts are not inherently dangerous unless acted upon in a dangerous way, and supposing that some are so dangerous that we should simply not speak of them is a position that deserves more thorough consideration than “someone might do something rash.”