This is a limiting perspective, one that is inherently pessimistic about LLMs.
The best NLP interfaces will ask users questions in order to figure out what their real problem is. This is similar to what teachers and therapists do: it is not a lazy interface, but a natural one. The chatbot will step the user through a decision tree in situations where the user doesn't know how to ask questions or frame the problem.
Decision trees are inherently limited in the inputs they can take from the end user (yes/no, etc.). The hope here, as I understand it, is to take free-form input from the user and map it back to one of the branches of the decision tree.
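That mapping step can be sketched in a few lines. This is a toy illustration, not anyone's actual system: in practice the matcher would be an LLM classification call, and the branch names and keyword sets below are invented for the example. A simple token-overlap scorer stands in for the model:

```python
# Map free-form user input onto the fixed branches of a decision tree.
# BRANCHES and its keywords are hypothetical; a real system would ask
# an LLM to pick the branch instead of counting keyword overlaps.
BRANCHES = {
    "billing": {"bill", "charge", "invoice", "refund", "payment"},
    "technical": {"crash", "error", "bug", "slow", "install"},
    "account": {"password", "login", "email", "username"},
}

def map_to_branch(utterance: str) -> str:
    """Return the branch whose keyword set best overlaps the input."""
    tokens = set(utterance.lower().split())
    scores = {name: len(tokens & kws) for name, kws in BRANCHES.items()}
    best = max(scores, key=scores.get)
    # When nothing matches, the natural move is the one described
    # above: have the bot ask a clarifying question.
    return best if scores[best] > 0 else "ask_clarifying_question"

print(map_to_branch("I was charged twice and want a refund"))  # billing
print(map_to_branch("hello there"))  # ask_clarifying_question
```

The fallback branch is the interesting part: instead of rejecting input the tree wasn't built for, the system loops back to question-asking, which is exactly the teacher/therapist behavior described above.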
We all gather information in order to recognize complex patterns and make decisions.
Some of those decision flows are extremely deep, with complex inputs to determine the ultimate decision.
Skilled teachers and therapists develop pattern recognition skills that allow them to tailor a response to the state of their interlocutor. That process is analogous to a decision tree, but feel free to apply another word to it if you like. Whatever we call it, I think chatbots will be able to do that, and I think that will be a good thing.
Everybody's thinking about chatbots as a black box that answers our questions.
That's not the real value. The real value is to give those bots much larger multi-modal prompts about whatever problem we seek to solve, and let the LLM ask us the questions that ferret out features which are not superficially visible, so that it can give us better guidance.