I can see how LLMs could help raise the standard in that field, for example by surveying related research and, perhaps in the not-too-distant future, by reproducing (some of) the results.
Writing consists of iterated re-writing (to me, anyways), i.e. finding better and better ways to express the content: 1. correctly, 2. clearly, and 3. space-economically.
By writing it down (yourself) you understand what claims each piece of related work discussed has made (and can realistically make, since papers sometimes contain inflationary lists of claims), and this helps you formulate your own claim as it relates to them (new task, novel method for a known task, like an older method but works better, nearly as good as a past method but runs faster, etc.).
If you outsource it to a machine you no longer see it through yourself, and the result will be poorer, unless you are a very bad writer to begin with.
I can, however, see a role for LLMs in an electronic "learn how to write better" tutoring system.
Pretty much yes. Critical analysis is a necessary skill that needs practice. It's also necessary to be aware of the intricacies of work in one's own topic area, defined narrowly, to clearly communicate how one's own methods are similar to or different from others'.
If I ask for a task and the output is not the one I expected, I ask for the reasoning that led to the bad decisions. Then ChatGPT proceeds to retry the task, "incorporating" my feedback, instead of answering my question!!
> The "off by one" predilection of LLMs is going to lead to this massive erosion of trust in whatever "Truth" is supposed to be, and it's terrifying and going to make for a bumpy couple of years.
This makes it sound as though searching for truth were a bad thing, when in fact it is what has triggered every philosophical enquiry in history.
I'm quite bullish, and think that LLMs will lead to a renaissance in the concept of truth, similar to what Wittgenstein did, to Plato's allegory of the cave, or to the late-medieval empiricists.
Pretty sure our identity is just that of the actor behind our own actions.
I.e. our brain models causal relationships, sees the correlation between its own pre-action thoughts and the following actions, and therefore models itself as an actor/causal agent responsible for those thoughts and actions.
> A bright, capable mind of 40 with an imagination factor of .75 may only have the cumulative real-world experience of a 10-year-old.
While provocative, that argument does not take into account the development of the brain. Processing early experiences is far different from processing experiences once the brain is fully developed. This includes how memories (knowledge) are stored.
The fact that our identities are a path integral through a unique four-dimensional spacetime curve does not undermine the utility of first-order characterizations of the resulting value. We do it all the time: where are you from? When were you born? What did you study? What's your favorite ice cream flavor? I am simply adding, and characterizing, an additional factor: how often do you dream? None of the answers to these questions tell the whole story of a person, but they are useful nevertheless.
More generally there are graph neural networks, for instance, but now you're including many dynamic networks that are not open-ended or evolvable. The idea is to identify common dynamics and add constraints on the types of networks that are included, so as to find general principles within that class (see the sketch below). Loosen the constraints and you make the class too broad and can't identify common principles.
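To make "common dynamics" concrete, here is a minimal sketch of one message-passing step, the shared update rule underlying most graph neural networks. The function and variable names are my own illustrative choices, not from any particular library:

    import numpy as np

    def message_passing_step(adjacency, features, weight):
        # Each node sums its neighbours' feature vectors...
        messages = adjacency @ features
        # ...then applies a shared learned transformation and a nonlinearity.
        return np.tanh(messages @ weight)

    # Toy example: 3 nodes in a path graph, 2 features per node.
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    X = np.random.randn(3, 2)  # node features
    W = np.random.randn(2, 2)  # shared weights
    X_next = message_passing_step(A, X, W)

Every network in this (constrained) class iterates some variant of this aggregate-then-transform update; that shared structure is exactly what lets you look for general principles across its members.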
The kind of apps that will be built in the next 5 years are nowhere near what we have today.
Developers will need to update their skillset, though.