> But it is to say that there’s a dimension—the narrative dimension of time—that exists beyond the ALU’s mathematical present. And our brains, because of the directional arrow of neuronal transmission, can think in that dimension.
This is so utterly laughable on every level. It reeks of pseudo-erudition, mixing up levels of abstraction and ignoring all of the relevant literature.
It fails to realize that, if the thing it is claiming were true, it would revolutionize the entire field of computer science: it would not only show that you can build a computer more powerful than a Turing machine, but that we already know how: just add directionality to the tape!
Of course, the fact that transistors themselves are directional (in three directions, no less, so they're actually one up on neurons!) is completely beyond the professor here.
Probably shows what happens when you read Gabriel Garcia-Marquez[0] instead of Alan Turing when trying to understand how computers work.
> Angus Fletcher is Professor of Story Science at Ohio State’s Project Narrative and the author of Wonderworks: The 25 Most Powerful Inventions in the History of Literature. His peer-reviewed proof that computers cannot read literature was published in January 2021 in the literary journal, Narrative.
I can only laugh. 'Narrative science' indeed.
[0] Just want to note here that I do not mean in any way to attack the value of literature. It is a wonderfully enriching experience; it helps shape the human mind, it helps us understand the world of humans outside our own experience, and it is an invaluable tool to that end. But it is quite obviously not a tool for logical thought, and I would bet most great authors would laugh at this idea as thoroughly as I do.
The "Never" part is not only impossible to prove but also a useless prediction. Just talk about the present and near future instead.
For those time frames I fully agree: there is no AI, it's just form without function. A shallow copy of one part of what writing is about. Just like makeshift wings strapped to your arms are to true flying: an expression of a wish, not even close to a result. Writing a novel involves living, having experiences, reflecting upon them; things that computers are nowhere near capable of for now.
And here I agree: presenting current solutions as writing AI is just marketing bullshit. Had the article stopped there, it would have been a lot fairer.
But if you also want to talk about the far future: I think it's just as wrong to say "never" as to say "it will surely come". People like to give examples of things we thought impossible but turned out doable. Pure selection/survivorship bias that proves nothing.
Very good points about the futility of arguing over what is possible or impossible in the distant future.
However, I think that this article in particular is making such a downright stupid argument that it deserves to be called out more explicitly. It is arguing that brains derive computational power (the power to model causation) from the directionality of nerve impulses, nucleus -> synapse and never the other way; and that computers with their binary transistors can only model stateless logic (true or false).
This is worse techno-babble than most Sci-fi. It's wrong on every level, and it's shameful for a university professor to produce such drivel and publish it.
Most of the article claims that causal reasoning is impossible for computers. I would claim that is obviously false, because programs that do causal reasoning already exist.
However, I generally agree that computers will have a hard time writing good novels. If computers end up being able to simulate an entire world of humans, it should be possible to do anything humans can (intellectually), so nothing is truly impossible.
The best reason I see that computers will have a hard time writing novels (or any type of art) is that the purpose of a novel in most cases is to convey a human mental state. In order to convey it properly, you have to understand human minds well. For humans this is pretty easy: you can use your own experience as a baseline. A computer, however, must watch people long enough to understand what is going on inside.
I also had this exact same thought, but after watching GPT-2 be a dungeon master, I think that as long as there's causal and temporal consistency, the AI can generate some surprising and funny storylines that humans normally cannot.
Stories don't really have to simulate human beings, just evoke engaging emotions in the reader. Very few fantasy novels actually teach anything meaningful.
> But as natural as causal reasoning feels to us, computers can’t do it. That’s because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term “equation” implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.
What about Prolog? How is it not a counterexample to this?
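To make the asymmetry concrete, here is a toy forward-chaining sketch in Python (the facts and rules are invented, and this is only a stand-in for real Prolog): rules fire from body to head and never backwards, exactly the directionality the article says computers lack.

```python
# Toy forward-chaining rule engine. Rules run strictly body -> head,
# mimicking the one-way inference of a Prolog program.
rules = [
    ({"rain", "no_umbrella"}, "wet"),  # rain AND no_umbrella => wet
    ({"wet"}, "cold"),                 # wet => cold
]

def infer(facts, rules):
    """Fire rules whose bodies are satisfied until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print(infer({"rain", "no_umbrella"}, rules))
# {'rain', 'no_umbrella', 'wet', 'cold'} -- note the asymmetry:
# starting from {"cold"} infers nothing, because the arrows
# only run one way.
```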
It's a nice touch that the author defends humans, but just like the people who said Go was too complicated for machines to learn, I'm pretty convinced that the author is ultimately wrong.
I think we need a new approach, however. Most ML/AI efforts have been directed at replicating convincing speech at the word and sentence level. I don't think it's possible to reach the goal of novel writing by merely extending this research, simply because that's also not how humans write novels.
I think what's missing for GPT-3 et al. (as well as for the models in music that I'm closely following) to reach this point is a new kind of adversarial approach that supplements the current bottom-up methods with a top-down mindset.
What we're still missing is the frontal lobe of deep nets: something that steers the underlying "high level" syntactic network toward the stories and narratives it wants to reach, while supervising setting, pace and form along the way.
Once we find that breakthrough, humanity is toast. Just my 2 cents.
> simply because that's also not how humans write novels.
Planes don't fly by flapping their wings either. It's totally possible we will construct a super capable artificial system that only vaguely resembles the human brain.
Personally I think if we managed to create a GPT-3 ten times more sophisticated (looking back 10k+ tokens instead of 2048), we might get pretty close to a novel-writing AI.
1) Generate a rough plot structure. Let's say even just three acts: motivation -> conflict -> resolution. Now you have a logical throughline, and your text-generation algorithm doesn't need human-level intelligence. The structure could be randomly generated, or come from a search over a database of plot structures with madlibs-style slots. Generate 10 characters for the story and insert them into the variables in the plot structure.
2) Use the acts generated above as GPT-3 prompts to fill in the details.
If you iterate on something like that for a while, I bet you could get a decent novel out of it that would fool most people. Maybe you need to define each chapter in the first step, maybe it needs more details in the writing prompts, etc.
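For what it's worth, a minimal sketch of that two-step pipeline might look like this. The plot template, character names, and the `generate_text` stub are all invented for illustration; a real system would call a language model (e.g. a GPT-3 completion endpoint) in place of the stub.

```python
import random

def generate_text(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to a
    # language model and return its continuation.
    return f"[model continuation of: {prompt!r}]"

PLOT_TEMPLATE = [
    "Act 1 (motivation): {hero} wants {goal}.",
    "Act 2 (conflict): {rival} stands in the way.",
    "Act 3 (resolution): {hero} confronts {rival} and {outcome}.",
]

def draft_novel() -> str:
    # Step 1: fill a rough plot skeleton with randomly chosen slots.
    slots = {
        "hero": random.choice(["Mara", "Ilya", "Quinn"]),
        "rival": random.choice(["the Syndicate", "her old mentor"]),
        "goal": random.choice(["to clear her name", "to get home"]),
        "outcome": random.choice(["prevails", "loses everything"]),
    }
    # Step 2: use each act of the skeleton as a generation prompt.
    chapters = [generate_text(act.format(**slots)) for act in PLOT_TEMPLATE]
    return "\n\n".join(chapters)

print(draft_novel())
```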
I would bet no less than 200, and possibly 2000 as well.
No one is even really trying to teach computers the basics of the world that we take for granted. And it's quite possible that computers will have to be many, many times larger than today to capture the necessary computation that happens in our brains - since a single neuron is itself a computer, and the interconnect seems to be much more complex than an ANN.
Let's remember also that evolution took a good few million years to train our brains (though of course that is not directly comparable to a guided training process).
People do try to teach computers the basics; they just don't get very far. My optimism is largely based on what people have been successful with, e.g. spatial stuff with self-driving cars, language with GPT-3, face recognition, games like Go, etc. They seem to be gradually working through human abilities. Obviously there's stuff they are not so good at yet, too.
I was specifically thinking of language here: GPT-3 does not "understand language" in any real sense. It is simply able to string together words that are more likely to come together in certain orders, but they are utterly meaningless to the network.
Language is a mechanism that we use to express a thought (sometimes externally, but most of the time internally in our own minds). A major part of language is an understanding of the world you are "talking" about. And yet GPT-3 has no idea about the world; it just knows that the words "the sky is blue" are more likely to appear in a corpus than "the sky is yellow". Even more deeply, it has no notion of what an object is, how it might differ from an agent, what movement is, etc. - any of the things that make up our shared human model of the world, that we talk about.
Until we can codify such a model (or automatically learn it), we won't be able to say that we have a model that "understands language". The GPT people are not even trying to look in this direction: they are throwing more text and more neurons at the problem and claiming it will work. They'll probably end up with a model that can essentially do the equivalent of instantly finding the best Shakespeare quote for any situation you throw at it, but still won't be able to reason about that situation in any way, or keep a story straight.
This is already visible in GPT-3: its output (even though most of what you tend to see is carefully hand-picked out of a soup of more meaningless output) meanders around without making any point. It simply does not have a point to make, and cannot, by the very nature of its construction.
Essentially, language is words + context, and GPT-X only has access to the words, with none of the context. Furthermore, no one is even thinking about how to add the context yet.
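As a toy illustration of "words without context": a model that only tracks which words tend to follow which can prefer "the sky is blue" over "the sky is yellow" with no notion of skies or colors at all. The tiny corpus here is invented; real models are vastly bigger, but the point stands.

```python
from collections import Counter

corpus = "the sky is blue . the sea is blue . the sun is yellow .".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # counts of adjacent word pairs
unigrams = Counter(corpus)

def score(sentence: str) -> float:
    """Probability of a sentence under a bigram model (no smoothing)."""
    words = sentence.split()
    p = 1.0
    for a, b in zip(words, words[1:]):
        p *= bigrams[(a, b)] / unigrams[a]  # P(b | a)
    return p

print(score("the sky is blue"))    # ~0.22: "is blue" is common here
print(score("the sky is yellow"))  # ~0.11: lower, yet the model knows
                                   # nothing about what a sky is
```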
But is that necessary? You could make similar statements about language translation, and yet computers do quite well at it. Also, people don't fully understand how consciousness forms but still write novels.
I would think something like GPT-3 trained on the classics and a specific genre would do well enough to pass some kind of book Turing test. Just make it follow Vonnegut's "shape of stories" https://www.openculture.com/2014/02/kurt-vonnegut-masters-th... and voilà!
Just remember the first of Clarke's three laws[0]:
> When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
The article says that computers cannot handle causal reasoning.
Judea Pearl is most famous for showing that it's possible to model causality mathematically. So, if you've heard of him, it seems reasonable you'd know this, and know that computers are good at doing math.
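For the flavor of it, here is a hand-rolled toy structural causal model in the spirit of Pearl's work (all variables and probabilities invented, no library). It shows the observation-versus-intervention distinction that his framework makes mathematically tractable, and it is, of course, running on a computer.

```python
import random

def sample(do_sprinkler=None):
    rain = random.random() < 0.3
    if do_sprinkler is None:
        # Observational world: the sprinkler responds to rain.
        sprinkler = random.random() < (0.1 if rain else 0.6)
    else:
        # do(sprinkler): the intervention severs the rain -> sprinkler edge.
        sprinkler = do_sprinkler
    wet = rain or sprinkler
    return rain, wet

def p_rain_given_wet(n=100_000, **kw):
    """Estimate P(rain | grass is wet) by Monte Carlo sampling."""
    draws = [sample(**kw) for _ in range(n)]
    wet_draws = [rain for rain, wet in draws if wet]
    return sum(wet_draws) / len(wet_draws)

print(p_rain_given_wet())                   # ~0.42: seeing wet grass raises belief in rain
print(p_rain_given_wet(do_sprinkler=True))  # ~0.30: forcing the sprinkler on does not
```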
Right. I think he is mixing up a few different but related things.
One is the ability of computers to do causal reasoning at all, such as an if-then statement or a Prolog program. The other is the ability to do it in a general-purpose way in open-ended domains and tasks.
We have machine learning that can learn in a somewhat open-ended way given enough example inputs and outputs. But that type of system usually isn't good at causal reasoning, because the "understanding" it has of the world is shallow and inaccurate.
There is no evidence that it's impossible for an AI to write a novel. It's just clear that existing systems are not currently close. But there are already successes in toy domains that make me optimistic.
Anyway I think the author has misunderstood some fundamental things and embarrassed himself with this article.
The simple answer is that there is no objective score of a good novel. What is a good novel? The only gradient descent that gets a good story is usually the one into madness.
What a truckload of crap. Brains can do causal reasoning because neurons can only fire in a single direction? Computers cannot because they can only say A=B? Really?
Computers are perfectly capable of causal reasoning, what else is an if statement? The issue is whether causal relations can be inferred from observations alone, and that is an entirely different issue.
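To spell out the trivial half of that, here is literally the kind of if statement meant above: an ordinary, strictly one-way, cause-to-effect rule (toy example, names invented).

```python
def thermostat(temperature_c: float) -> str:
    if temperature_c < 18.0:   # cause
        return "heater on"     # effect
    return "heater off"

print(thermostat(15.0))  # heater on
print(thermostat(22.0))  # heater off
# The rule runs only from condition to consequence: the string
# "heater on" never rewrites the temperature that produced it.
```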
Professors of story "sciences" should study some real science before babbling their delusions like this.
So much this. It is shameful to publish this drivel with the gravitas of a professor. He even has the gall to tell us he published this argument in a peer-reviewed (literature) paper!
While I disagree with the authors completely, comparing GPT-3 to a working plane is also wrong.
GPT-3 can produce human-sounding pieces of text that don't have any meaning. I am quite certain that it will prove a dead end in the advancement of NLP, simply because trying to learn human writing by matching lots of text is unlikely to create an accurate model of the world that you could use to produce meaningful communication.
Well, the first flight at Kitty Hawk was an unimpressive 12 seconds, so it still took quite a while for people to actually start admitting that practical powered flight was possible or useful.
Analogously, GPT-3 is clearly far from perfect. But it does seem to be sufficiently useful for people to use it as a creative writing tool already today.
(Tantalizingly, there are some signs in people's playing with it that suggest the beginnings of a there there. However, like a lot of people, I've been disappointed too often before to be sure; not without a lot more research. So I won't push that point today.)
What about average novels? Some genres are really formulaic and even repeat the same plot structure from book to book with quite minor alterations. I don't see how these couldn't be done with AI.
Certain kinds of novels can perhaps be written well by computers, but I believe the classical conception of a novel, one which has both internal narrator state and external state, cannot conceivably be written well with current methods.
Note I used "well written" as an indicator of goodness; obviously internal and external states can be represented, but will they make anything approaching sense and aesthetic pleasure, ever, given current strategies for computerized writing of novels? I don't believe so.
The current strategies for any content creation via computer are, as I understand them, always based on having a large enough corpus to be able to determine what should be done.
The corpus for good novels is of course smaller than the corpus for novels.
The corpus for the internal state of narrators is smaller than the corpus of good novels, and the corpus for the internal state of narrators who represent characters of the current generation will not be very big until that generation is no longer current.
Things that are probably achievable: a unified style, but only as pastiche; so rewriting Huckleberry Finn in the style of Jane Austen seems like it might be doable in less than a decade. A unified style that is unique seems unlikely within a decade, and a unified style that represents whatever are the young, happening writers of the moment seems like it would require a different way for the computer to generate content. Although two decades from now, maybe?
on edit: I seem to go from thinking it will never happen to thinking it might happen in two decades. I believe it won't happen, but lots of people have believed stuff would never happen and been proven wrong. So, accepting the requirement that we use the methods for computer generation of content currently available: if those methods have not produced good novels in two decades (by throwing more computing power at bigger corpora), then I will feel confident that my belief that they never will is correct.
Genre fiction usually follows a select set of tropes from a larger pool, so I believe it is possible for an AI to write mediocre genre fiction in the near future.
Literary fiction and the higher forms of genre fiction, on the other hand, almost always have layered meanings, nods to other texts, and references to events in real life, which makes me believe that if we can create a computer that can reach that level of intelligence, it can do everything a human can as well. Then we are obsolete.
Well, I've assumed Neal Stephenson has been releasing books written by a machine trained on wikipedia and fan fiction for years now. They're not good but people still buy them.
So is Go, and people told us "computers will never be good at this, because the brute forcing used for chess won't work." AlphaGo showed that to be wrong.