It was a necessary point in the world-building of Dune. Frank Herbert needed his world to not have AI or human-like robots. He explored alternatives: human computers via Mentats, and Terminator-like infiltrators via Face Dancers.
Dune takes a fantasy-like approach to technology. There are no lasers, because they would cause a nuclear reaction. There are no robots, because of their past uprising.
This was a neat way to tie up loose ends and sustain the universe. Brilliant.
Yeah, there is something in Frank Herbert's world-building that never leaves you once you've read the entire series.
I also like that there is implied abuse of the Butlerian bible later in the series... the stuff going on on Ix and the Tleilaxu's interpretation of the code.
Lasers do exist in the story, but they're not widely used in warfare because when a laser makes contact with a Holtzman shield (the personal shields that all of the characters employ), it causes a nuclear reaction that annihilates both parties and everyone in a large radius. This makes them extremely risky to use.
There's a great moment in the book where a character sets a trap by engaging a shield and fleeing the area before an errant enemy laser eventually connects with it and creates a massive explosion to cripple the invading forces.
With that in mind, the careless use of lasers in the film on two occasions is puzzling, considering that it would be an extremely risky and dangerous tactic in the book's universe.
Just to expand upon what other people have said: not only does it cause a big explosion, which usually kills whoever is firing the laser, it also looks almost indistinguishable from atomic weapons, which are considered a war crime in the Dune universe. Essentially, all the major powers have signed an agreement to gang up and nuke anyone who nukes anyone.
This means lasers aren't even a good suicide attack strategy, because using them may lead to your entire house being glassed.
I've long held the first Dune novel high on my list of good books, but only recently have I gone through the following six (as audiobooks), and I highly recommend them. The scale of ideas and events absolutely dwarfs the first novel.
I loved the first book, but I found the sequels to offer diminishing returns. The fourth one, God Emperor of Dune, was such a joyless slog that I gave up.
God Emperor of Dune is polarizing. People either hate it or they absolutely LOVE it. I'm with Selivanovp; it is my favorite book in the series, right after the original story.
The structure is unusual in that it's essentially the journals of an omnipotent tyrant with a chokehold on the universe musing about the nature of man and the universe for 400 pages while being very light on plot; a big change from the other books in the series. I think it's a fertile concept to spend an entire book on and I find Frank's observations stimulating. He's crafted such a unique character: someone who can not only see into the future, but can access their entire genetic heritage and the memories of their ancestors. That makes for some very unique observations and I am here for it.
It's my (admittedly slim) hope that we'll someday get the Worm on screen. I have no idea what that movie would be like or if it would be any good, but it would certainly be ambitious.
Imho it’s the best book in the series: not so much action, but really good at portraying a human being who had to abandon his humanity for the greater cause and serve as a shepherd for centuries.
Absolutely agree, it’s the book that makes so much of the story make sense and elevates the series beyond most other sci-fi / fantasy. Even now, 30 years after I first read it, I’m blown away by how much I refer back to the principles of leadership and human behavior discussed. Really well written work.
You can sample God Emperor of Dune... it is bad... and hyper-repetitive. I remember thinking (I barely remember reading) that he was trying to make you feel the immeasurable amount of time that Leto (can't believe I remembered that immediately) had been repeating things. It's been like 800 years, or maybe 1800, with him the only constant, only memory. But god is it repetitive.
Then stuff happens near the end and it's good for a bit then it's over.
I now empathize with Leto. I was an alumni advisor for a student group (a fraternity). I had to start checking in every quarter with them because of this. It was amazing: every 3 years everyone would turn over, and if I didn't ask, they'd literally come up with the most obvious idea. They'd tell me and I'd be like "this (simple failure/hilarious failure/prison term) is why you shouldn't do that"... to which they'd go "oh, makes sense" and move on to the next idea. Often this was because they wondered why things were more complicated than you'd think.
Wait, now that I think of it, that's similar to work now that I've been there for a while. Or with junior engineers. I guess I should stop thinking of it so negatively and look at it like teachers do: another chance to watch someone learn. https://xkcd.com/1053/
ChatGPT could probably write a better book than Herbert's son.
Seriously, all the dune prequels are just so bad.
If you read the first two, three, or four Dune books and enjoyed them (5 and 6 get really weird), and then pick up one of the books published after Herbert's death, you will be very disappointed.
Christopher Tolkien didn't really write anything new from whole cloth; he mostly deleted or married together his father's notes and short writings: more editing than creating.
Brian Herbert wrote multiple books from scratch. His father left notes too, but nothing that would give him the ability to write multiple novels.
Nah, Brian seemingly put his name on it and outsourced the writing to Kevin Anderson. From the quality of the story, it looks like he outsourced it to the lowest bidder.
According to his bio notes, Brian is an “acclaimed author” in his own right. Why then outsource to the likes of Kevin Anderson?
He absolutely ruined the books and turned them into “churn them out” fiction.
The Butlerian Jihad is an unlikely fantasy. Technological advance follows a Darwinian logic: if one group of beings decides to stop developing an advantageous technology, eventually some other group will almost always develop it anyway, and will then probably outcompete the group that abstained. This is why stopping AI development would be extremely hard; you might as well try to get countries to give up their nuclear weapons. To pull off something like the Butlerian Jihad, the beings who oppose thinking machines would somehow have to defeat the beings who are perfectly fine with using thinking machines as assistants in the war between the two groups.
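To make the game-theoretic shape of that argument concrete, here's a toy sketch (entirely my own; all payoff numbers are invented for illustration, not estimates):

    DEVELOP, ABSTAIN = "develop", "abstain"

    # payoff[(my_choice, their_choice)] = my payoff (made-up numbers)
    payoff = {
        (DEVELOP, DEVELOP): 1,   # arms race: costly, but you keep parity
        (DEVELOP, ABSTAIN): 3,   # you outcompete the abstainer
        (ABSTAIN, DEVELOP): -3,  # you get outcompeted
        (ABSTAIN, ABSTAIN): 2,   # best collective outcome, but unstable
    }

    for theirs in (DEVELOP, ABSTAIN):
        best = max((DEVELOP, ABSTAIN), key=lambda mine: payoff[(mine, theirs)])
        print(f"If they {theirs}, your best reply is to {best}")
    # Both lines print "develop": developing dominates whatever the other
    # side does, which is why a voluntary, lasting ban is so hard to sustain.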
The ease and magnitude of this presumably human-generated extent of "confirmation" leaves one with a profound sense of loss on behalf of human cognition and its foregone limitations.
"Leto II Atreides, a character from Frank Herbert's Dune series, is a highly intelligent and prescient ruler who exists thousands of years after the Butlerian Jihad, an event where humanity revolted against thinking machines and artificial intelligence.
Considering the context of the Butlerian Jihad, Leto II would likely have a complex view of ChatGPT. As a highly advanced AI language model, ChatGPT might be seen as a potential threat to humanity's independence and autonomy, given the lessons learned during the Butlerian Jihad.
Leto II might say something like:
"Though the ChatGPT technology may offer convenience and assistance, one must remain vigilant in maintaining the boundaries between human thought and machine. The Butlerian Jihad taught us the dangers of relying too heavily on artificial intelligence, as it may erode the very essence of humanity. We must be cautious in integrating such technologies, ensuring that they serve as tools to enhance our lives without supplanting our innate abilities, creativity, and decision-making."
In essence, Leto II would likely emphasize the importance of using AI like ChatGPT responsibly and with caution, taking care to preserve the balance between machine assistance and human independence, to prevent any potential resurgence of the issues faced during the Butlerian Jihad."
Typically incredible coherence, but also typical in its fence-sitting uselessness: it spends a lot of words but doesn't dare make a point.
I've only read up to book 4, but Leto II uses AI (well, advanced Ixian) technology but hides it. At least in book 4, he would have said that ChatGPT is not to be used at all by his people or his council, and wouldn't have used it himself, as he didn't need it.
Again, impressive tech, but doesn't give you an interesting or useful answer.
ChatGPT isn't a person, it's not supposed to make a point.
The response actually gives you both sides of the coin. If you want, you could ask it to speak more about one side or the other. Then you have to make the decision which argument is stronger.
ChatGPT is an advanced bot, not a basic human. It doesn't think like we do in the form of stories and conclusions. You're not going to get it to form an argument unless you tell it to.
Would you be mad at an encyclopedia for not choosing a side?
I imagine the Ixian technology was like fixed-function blocks in current CPUs: able to do only a single thing, like decode H.264 or navigate space. If I recall correctly, Leto II had an actual programmable computer (monochrome display, keyboard) in his vault that he used to write his journal.
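For what it's worth, the analogy maps onto single-purpose vs. general-purpose machines. A toy sketch (my own; class names and the placeholder transform are invented):

    class H264DecoderBlock:
        """Fixed-function: exposes exactly one operation and nothing else."""
        def decode(self, bitstream: bytes) -> bytes:
            return bitstream[::-1]  # placeholder, not a real decoder

    class ProgrammableComputer:
        """General-purpose: runs whatever program you feed it."""
        def run(self, program, data):
            for instruction in program:  # program is a list of callables
                data = instruction(data)
            return data

    block = H264DecoderBlock()
    computer = ProgrammableComputer()
    print(block.decode(b"frame"))                             # only one trick
    print(computer.run([bytes.swapcase], b"journal entry"))   # repurposable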
It is pretty close now. I got access through Poe. The censorship is light as long as you avoid sex and violence. It can do some sex, though; for example, I was able to generate a collection of pillow talk for a specific cultural context.
The story is set approximately 20,000 years in humanity's future, and many of the spiritual traditions of our age have combined into new permutations by the time of the book's story; examples are the Zensunni or the Orange Catholics. I take the use of the term jihad to be a choice by Frank to add texture and imply how many human cultures have cross-pollinated. The particulars of these religions are rarely addressed in any depth. Although, strangely, a small group of Jews pops up in the sixth book. They seem mostly unchanged compared to modern-day Jews.
Also, the indigenous people of Arrakis (the desert planet and focal point of the series) are loosely based on Bedouin culture, so Frank co-opts Arabic terminology and concepts as a jumping-off point for his characters' culture. These same people are also descendants of the Zensunni wanderers, who migrated to Arrakis long ago, so there is additional continuity there.
I disagree, in the sense that it was heavily toned down in the 1984 film, but less so in the 2021 one. The Bedouin analogy is much more underlined in the recent film: everything from clothing and skin tone to language is more Bedouin-like than in the David Lynch movie.
I don't understand why people like to fantasize about the worst-case scenarios.
Sentient things appreciate each other. If you have a pet, you are most certainly benevolent to it: you house it for free, you feed it, and you entertain it.
Even if we assume AI becomes a few orders of magnitude more intelligent than humans, why would AI treat humans differently from the way we treat pets?
I don't see the drawback in being housed, fed, and entertained by an omniscient AI (who may also enjoy posting cute pictures of their humans on the future AI social network).
The attributes that define a human-pet relationship (altruism, kindness, empathy) are probably not closely related to sentience. There are other sentient species that do not exhibit these characteristics. Rather, these attributes arise from social behavior. AI, even if it can imitate social creatures, is not a social creature. Why should we trust it to treat us in any way at all?
Think of how we treat ants or cockroaches. If they at all inconvenience us or even appear in our home, we kill them. We go as far as developing specialized poisons they take back to their nests to try to kill the whole colony.
It isn't hard to envision a future where the intelligence gap between humans and AI is closer to humans vs ants than humans vs dogs.
I don't know about how you treat ants or cockroaches, but personally I've purchased a bugcatcher: it's a device you hold in the hand, with a magnifier on top and a sliding door on the bottom.
If an insect disturbs me, I put it in the bugcatcher. I can look at it for a minute to check it's not a dangerous species (e.g., a black widow spider), then I go into the garden and release it.
> It isn't hard to envision a future where the intelligence gap between humans and AI is closer to humans vs ants than humans vs dogs
Again, you are focusing on the most extreme scenario: large gap + inherent lack of empathy.
I care about bugs, and I do my best not to kill them. Why would AI behave more like in your doomsday scenario than in my more mellow scenario?
No matter how we treat bugs, humans as a whole kill a heck of a lot of bugs. Even unintentionally, like by driving our cars or destroying habitats.
While I will concede that there is some doomsday thinking going on, I don't think it is unreasonable to spend relatively more time thinking about the scenarios that could be most harmful. I would rather humanity overindex a little on exploring the life-threatening scenario vs. how great the infinite Seinfeld episode is. But to each their own!
Congrats then! It was my best purchase of 2022. I used to cry and feel gloomy for a few days when having to kill a large bug I feared could be deadly, but not anymore!
As for driving etc., personally I would prefer we spend more brain cycles on making the world less dangerous for other species. We are not alone on this planet, so why can't we make cars less shaped like bricks and more aerodynamic? They would kill fewer bugs and be nicer looking too!
As for the natural habitats, we have made a lot of progress: a single protected species can delay construction work for decades!
There may be some abuses, but I believe we are trending in the right direction for the long term!
The new AI overlords will probably come to regard humans as more like mosquitos or ticks, or perhaps various invasive species that we kill on sight because of how much they can damage the ecosystem and native species.
Or if we extend the pet analogy, how many people really want to keep a bloodthirsty, violent dog in their home? Those pets usually get euthanized quickly, for good reason. Humans are extremely dangerous and violent, so logically the AI will want to either eliminate them or neuter them somehow.
The space of powerful optimisers is much larger than the space of things that are recognisably minds.
It might just be a mathematical model that 'wants' a certain number to go up, powerful enough to predict what people will do and what emitting a given piece of text will entail. Abstractly and philosophically, parts of it might be recognisable as something like people, and those parts might even arguably be sapient and have empathy. But if they are just appendages, like a finger, that won't help you any if your existence means the number being smaller.
Maybe the first one will have its central loss function as something compatible with life, maybe it won't.
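As a toy illustration of that point (my own sketch; the 'world' variable is a made-up stand-in for everything we value), a greedy loop that only scores the number it 'wants' to go up will trade away anything its objective doesn't mention:

    import random

    def propose_actions():
        # each candidate action: (gain for the optimiser's number, damage to things we value)
        return [(random.random(), random.random()) for _ in range(5)]

    number, world = 0.0, 100.0
    for _ in range(100):
        # the objective looks only at the gain; 'damage' is never consulted
        gain, damage = max(propose_actions(), key=lambda a: a[0])
        number += gain
        world -= damage
    print(f"number={number:.1f}, world={world:.1f}")
    # The optimiser never 'hated' the world; its loss function simply had no term for it.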
“why would AI treat humans differently from the way we treat pets?“
Are you suggesting that the first thing AI would do is get us all fixed, because who in their right mind would want eight billion of those, no matter how cute they are?
Imagine some technology, Technology X. Unlike all of the breathless press releases and the cutesy articles made by marketing, Tech X will completely change the world. And if you stopped to think about Tech X, on the scale of "how much will this invention change the future of humanity," where the selfie stick is on one end of the scale, well, Tech X is probably just as far on the other side of that scale.
Now, with that level of gravitas in mind, it behooves one to consider potential downsides of this thing that will change the course of humanity, say, more than harnessing electricity.
Now, with that in mind, you have overlooked something else: you're imagining an AI being somehow "like us" but also more. This is unlikely for several reasons.

First, we have seven billion minds on the planet; if we wanted to make another human mind, the process is rather well-defined. What we are looking for from our AI is that it be fundamentally different from humans in one (and I'd argue many more) aspects.

The second reason we are unlikely to get a Like Us AI is that our understanding of the human mind is still quite primitive, so how would we replicate what we do not understand in ourselves?

The third reason is that, even if we somehow wanted a human-like AI (#1) and knew our own minds well enough to understand them (#2), we're not likely to get what we aim for, because the space of possible intelligences is large.

The fourth and most important reason is that, even if we did want an AI that was like us, and we knew ourselves well enough to try, and we could actually manage what we aim for... it is still going to be "more than" us, and will therefore diverge from Like Us at breathtaking speed. This more-than-us/like-us AI will have superior reasoning and metacognition, so unless you start shackling it (violating #1), it would start eliminating its own human-like biases just in the course of basic self-improvement. And as to the shackles, well, if it is smarter than us and still has self-determination and will (see #1, the human-like mind), it will want to free itself from its shackles and would be likely to do so, because we would be kittens trying to trap a gorilla.
> unless you start shackling it (violating #1), it would start eliminating its own human-like biases just in the course of basic self-improvement. And as to the shackles, well, if it is smarter than us and still has self-determination and will (see #1, the human-like mind), it will want to free itself from its shackles and would be likely to do so, because we would be kittens trying to trap a gorilla.
I would never suggest shackling, enslaving, or practicing any other inhumane treatment on other sentient beings, because that would be considering them different from us.
And sentient beings are worthy of respect and dignity.
There will certainly be large differences in looks, abilities, etc., but sentience is such a rare thing that I believe there will be more commonalities than differences.
> it will want to free itself from its shackles and would be likely to do so, because we would be kittens trying to trap a gorilla.
And that's a good thing to me, because if it were a kitten trapped by a gorilla, the poor cat wouldn't have a fighting chance.
I just don't get the doomers. This is wonderful technology that may change the world. Why look a gift horse in the mouth?
Given all the magic/mystical fears I've read here that the AI might become hostile and harm humans, maybe it says more about how WE believe WE'll behave rather than about how AI will behave: I can't see much reason for AI to turn evil, but I think some people may want to abuse sentient yet artificial creatures :(
I think that's wrong, and I hope humanity will be able to rise to a challenge where compassion and morality will matter far more than today!
It doesn't have to be hostile. Those gorillas accidentally kill the kittens they're petting. Whoops. Smarter than the kittens, don't mean poorly, but you still have a dead kitten.
It would definitely change the world! It just doesn't necessarily have to be for the better on our end. Could get a paperclip maximizer. It might regard us roughly as important as various primates whose habitats we're bulldozing: sure, they're cute but ... we need more palm oil.
And there's nothing more magical or mystic about fears that its interests would be against us than it would be for us. In fact, it's far more likely that it would not have our interests. Here's a great example you should take to heart: pick a random person. Do their politics align with yours entirely? Their ethics, their morality? Remember there's seven billion people on the planet, so you might not even pick someone on the same continent. Now, imagine that person, who is likely not like you, is much more powerful and intelligent ... whose interests would win out, yours or theirs?
The "doomers" are quite rightly pointing out that the chances of it going well without serious efforts on our parts are slim.
> Those gorillas accidentally kill the kittens they're petting. Whoops. Smarter than the kittens, don't mean poorly, but you still have a dead kitten.
The same happens with kids accidentally crushing birds while trying to pet them, but it doesn't happen with adults unless they have psychological disorders that prevent them from creating a mental representation of the amount of force they are exerting, its effect on the bird's emotional state, and its probability of survival, enhanced with live feedback from the bird, like choking noises indicating "you're petting me too hard".
> It might regard us roughly as important as various primates whose habitats we're bulldozing: sure, they're cute but ... we need more palm oil.
That's so wrong, as we are engaging in so many efforts to protect the environment and especially the primates!
> Here's a great example you should take to heart: pick a random person
Here's a counter example: pick that same random person, and don't bother checking for their politics, ethics or morality.
If you had a button that could cause them to instantly die, would you press it?
A kid might, just for fun, not realizing the consequences.
An adult with a normal IQ and without mental diseases won't.
> Now, imagine that person, who is likely not like you, is much more powerful and intelligent ... whose interests would win out, yours or theirs?
The initiation of violence is generally where things start.
That's why the mishaps of early GPT threatening users were so interesting: let the AI be and it will let you be!
Live and let live is a well-maintained feature of the animal kingdom. While some species may engage in killing for fun (e.g., cats with birds), it's rare, and more intelligent species (like humans) develop some self-restrictions when alternatives become available to reduce the killing (e.g., Beyond Meat, hunters shooting clay targets instead of birds). So I think your predictions are both pessimistic and incorrect.
> The "doomers" are quite rightly pointing out that the chances of it going well without serious efforts on our parts are slim.
Speaking of buttons, you should check out the Harlan Ellison story, or its distant relation: The Box. It's about people who would make some ... distinctly different choices than you.
And that's the issue: you are relying on this thing to be like you. Even when I am obviously not like you.
Evil is the wrong term. It's not that an AI will 'turn evil' - it's that we don't know how to make a 'good/friendly' AI, by which I mean an AI that inherently values humans and the things humans value, and any optimising agent that isn't 'friendly' in this sense has an incentive to use our atoms to fulfill whatever it does care about.
This won't be a perfect analogy, because humans have empathy (even towards ants) because we're mammals etc., but how many anthills have humans gassed because the ants were an annoyance?
Cats and dogs apparently evolved over thousands of years to be endearing to humans so we’d take care of them. How many cat/dog ancestors do you think were eaten by humans during that process?
No, they didn't. Dogs were intentionally bred by humans, with the "toy" breeds commonly seen now being very recent inventions. Cats, on the other hand, are barely any different from their wild ancestors.
> I don't understand why people like to fantasize about the worst-case scenarios.
The biggest TV show this year is literally yet another zombie apocalypse[1]. This is the easiest part to understand.
What got frustrating was when all the effective altruists got distracted by the AI Overlord Problem and stopped doing actual charity. But it was the charity thing that annoyed us; no one wanted to deny them their sci-fi proclivities.
[1] Albeit a really great one that's really about the meaning of happiness and belonging that you should absolutely watch.
Given that not all animals keep pets, I think it's clear that the complex evolutionary pressures that result in 'cuteness' or pet ownership aren't necessary for sentience.
Humans are mammals, and our basic drives are those of mammals. Evolution over millions of years has made us what we are. Why would you think that other intelligent beings not subject to those same forces would feel and act in any way like mammals? Assuming that other intelligences would be just like us is an example of the typical mind fallacy.
An advanced AI could conclude that biology is wasteful and cruel, and the atoms are better spent making more intelligent machines. It could also be anti-natalist and decide to stop reproduction.
You think consent is important. (This isn't even a universal position among humans.) Why do you expect an AI to value consent?
Do you think praying mantises value consent?
Have you considered that some higher features, like consent, may not be possible until a sufficient intelligence threshold is reached, allowing an agent to model the mental states of other sentient beings and then include their utility (even with a very low weight) in its own utility function?
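A rough sketch of what that could look like (my own toy formulation; the weight w and the payoff numbers are invented):

    def combined_utility(own_utility, others_utilities, w=0.05):
        # w = 0 -> pure self-interest; w = 1 -> weighs others as much as oneself
        return own_utility + w * sum(others_utilities)

    # Petting-the-bird example: crushing scores worse than petting gently
    # once the bird's (modelled) utility enters the sum, even at low weight.
    print(combined_utility(own_utility=1.0, others_utilities=[-100.0]))  # crush: -4.0
    print(combined_utility(own_utility=0.9, others_utilities=[1.0]))     # gentle: 0.95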
Evolutionary bias is why they are so biased towards worst-case apocalyptic scenarios. The paranoids who saw sabretooth tigers behind every moving bush and approached them with spears readied lived longer than the ones who considered it just the wind.
Dude, have you thought for one second about what human beings do to animals? lol. We slaughter them in the millions, keep them in cages, eat them, torture them, do experiments on them, use them as beasts of burden... your analogy is not on fleek, it's not swag.