Subjectively, the "getting lost" feels totally different from human conversations. Once there is something bad in the context, it seems almost impossible to get back on track. All subsequent responses get a lot worse and the model starts contradicting itself. It is possible that more training can improve this, but what is interesting to me isn't that it's worse than humans in this way, but that this sort of difficulty scales differently than it does in humans. I would love to see some more objective descriptions of these subjective notions.
Contradictions are normal. Humans make them all the time. They're even easy to induce, due to the simplistic nature of our communication (lots of ambiguities, semantic disputes, etc.).
Any sufficiently large exchange of information could be interpreted as computational if you break it into separate parts. That doesn't mean it is intrinsically computational.
Seeing human interactions as computer-like is a side effect of our most recent shiny toy. In the last century, people saw everything as gears and pulleys. All of these perspectives are essentially the same reductionist thinking, recycled over and over again.
We've seen men promising that they would build a gear-man, resurrect the dead with electricity, and all sorts of (now) crazy talk. People believed it for some time.
If data integrity is assured, so the data is not changed when it is stored or transferred, isn't that the opposite of computationally transforming the data?
How do we see robots, AI, and helper interactions in film, TV, and games?
A curated list of films for consideration:
Mary Shelley's "Frankenstein" or "The Modern Prometheus" (1818), Metropolis (1927),
I, Robot (1940-1950; Three Laws of Robotics, robopsychology),
Macy Conferences (1941-1960; Cybernetics),
Tobor the Great (1954), Here Comes Tobor (1956),
The Jetsons (TV, 1962; robot maid Rosie),
Lost in Space (1965),
2001: A Space Odyssey (1968),
THX 1138 (1971),
Star Wars (1977),
Terminator (1984),
Driving Miss Daisy (1989),
Edward Scissorhands (1990), Flubber (1997, 1961), Futurama (TV, 1999-), Star Wars: The Phantom Menace (1999), The Iron Giant (1999), Bicentennial Man (1999),
A.I. Artificial Intelligence (2001), Minority Report (2002), I, Robot (2004), Team America: World Police (2004),
Wall-E (2008), Iron Man (2008), Eagle Eye (2008), Moon (2009), Surrogates (2009),
Tron: Legacy (2010), Hugo (2011), Django Unchained (2012), Her (2013), Transcendence (2014), Chappie (2015), Tomorrowland (2015), The Wild Robot (2016, 2024), Ghost in the Shell (2017),
~AI vehicle: Herbie, The Love Bug (1968-),
Knight Rider (TV, 1982-1986), Thunder in Paradise (TV, 1993-95), Heat Vision and Jack (1999), Transformers (2007), Bumblebee (2018)
Games: Portal (2007), LEGO Bricktales (2022), While True: learn() (2018), "NPC" Non-Player Character
What you're talking about has absolutely nothing to do with the paper. It's not about jumps in context. It's about LLMs being biased towards producing a complete answer on the first try, even when there isn't enough information yet. When you provide them with additional information, they stick with the originally wrong answer. This means you need to frontload all the information in the first prompt, and if the LLM messes up, you have to start from scratch. You can't do that with a human at all. There is no such thing as a "single-turn conversation" with humans. You can't reset a human to a past state.
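Concretely, the workaround looks something like the sketch below: rather than appending corrections to a conversation that has already gone wrong, you rebuild one consolidated first prompt and start a fresh conversation. This is only a minimal illustration; call_llm is a hypothetical stand-in for whatever chat-completion API is in use, not any specific provider's SDK.

```python
# Minimal sketch of the two strategies. call_llm() is a hypothetical
# stand-in for a chat-completion API that accepts a list of
# {"role": ..., "content": ...} messages and returns a string.

def call_llm(messages):
    # Placeholder so the sketch runs; a real version would call an LLM API.
    return "answer based on: " + messages[-1]["content"]

def drip_feed(task, clarifications):
    """Multi-turn: reveal information one turn at a time.
    If an early answer is wrong, later turns tend to anchor on it."""
    messages = [{"role": "user", "content": task}]
    answer = call_llm(messages)
    for extra in clarifications:
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": extra})
        answer = call_llm(messages)
    return answer

def restart_frontloaded(task, clarifications):
    """Workaround: discard the derailed context and put everything
    the model needs into a single fresh first prompt."""
    consolidated = "\n".join([task, *clarifications])
    return call_llm([{"role": "user", "content": consolidated}])

if __name__ == "__main__":
    task = "Write a query for the sales report."
    clarifications = ["Only include 2024.", "Group by region."]
    print(restart_frontloaded(task, clarifications))
```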
I have experienced that in person many, many times. Jumps in context that seem easy for one person to follow, but very hard for others.
So, assuming the paper is legit (arXiv, you never know...), it's more like something that could be improved than a fundamental difference from human beings.