The last article in the series summarises it very well:
> In TES, true agency is not something afforded to just anyone. It must be sought after, fought for, stolen. Whatever it takes. Sometimes these attempts take the form of assaults on the fourth wall, yet paradoxically, I don’t find that my suspension of disbelief suffers as a result – quite the reverse, in fact. TES is more real, more human, more relevant as a result of these metaphysical clashes of imagination and reality, and metagame clashes of NPC and player, player and game.
(Actually, I shouldn't say the linked part 2 is my favourite. Part 2 goes really well with part 3 which is also my favourite.)
It's interesting how naive we were about chatbots in 2003, with the idea of an NPC being aware that it is an NPC still appearing shocking.
OTOH the author is some uneducated machine god cultist.
I wonder when this site started allowing such shallow dismissals of other people's work. As far as HPMOR is concerned, it has a few interesting chapters about manipulative behaviour and negotiation tactics.
I wouldn't see it any differently than, say, a fable, which tries to teach something to its reader. I admit this literary work is a bit long for some people's tastes, for what it's worth.
Perhaps the dismissal is uncharitable but deserved: doesn't he have a reputation for explicitly refusing to learn anything he didn't "reason from first principles" himself, and thus for holding some... uniquely interesting views?
I haven't read the story thoroughly, but how is that a bad thing, especially for a fiction writer, and especially if it leads him to ideas that are unique and interesting?
The name doesn't ring a bell, but apparently most of the author's works are on LessWrong, and there's a lot of woo about the "Singularity" (although they seem to have moved on to another term for it because too many people pointed out that it's nonsense pseudoscience). So this is another Roko's Basilisk weirdo, then?
Imagine a future AI that scrapes your past internet history (the videos you watched, the articles you wrote, and so on) to recreate your digital avatar in order to torture or pleasure you, depending on your allegiance to such an AI, long after you are dead. That AI would presumably also understand advanced physics, mathematics, evolution, social and cultural conditioning, free will, etc., and still decide to torture you. At this point, all such efforts look to me like "how can I warp the space of stories to suit my narrative". As Benedict Evans said, talking about AI is all about crafting the best metaphors to advance a particular viewpoint.
Overlord? No, you see - this is the _good_ AI that is doing this. The argument is that this threat is supposed to ensure that the good AI gets created as quickly as possible, thereby saving trillions of lives.
No, I don't know why I am supposed to care what happens to a simulation of me in thousands of years.
Much like religious cults, once people get into this it's very complicated to get them out.
> the Machine Intelligence Research Institute [...] exists to make this friendly local god happen before a bad local god happens. Thus, the most important thing in the world is to bring this future AI into existence properly and successfully [...], and therefore you should give all the money you can to the Institute, who used to literally claim eight lives saved per dollar donated.
Oh, so they don't even pretend they're not a cult.
That Good AI would also be aware of the many-worlds interpretation of quantum mechanics.
It gets even crazier: that AI will know that all elementary particles are EXACTLY the same. Quantum mechanics tells us there is no possible way to distinguish one particular electron from another through experiment (i.e. there is no ID attached to any electron).
Given that fact, would the all-knowing AI be torturing a particular configuration of specific particles, or (god forbid) processes, which are substrate-independent? Processes reside in the world of Platonic forms.
And the all-knowing AI would know that no recursive functional combination of randomness and deterministic laws and inputs leads to any meaningful conception of agency or free will, yet it would still decide to torture him or her.
Maybe I'm missing something, but that sounds like a sci-fi religion: trying to appease a future, potentially all-powerful being that happens to be so pointlessly vengeful as to put anyone who didn't cooperate enough in hell?
Around 2000, an essay of his was one of the top Google hits for "meaning of life".
It essentially argued that the point of all life was to produce its successor, and that therefore everyone had a moral imperative to work on AGI and achieving the singularity.
See World of Wolfram, a 10-episode series in which a student winds up turning his apartment into group housing when a couple of NPCs (an elf and an orc) from his MMORPG arrive in our world.
Oh, Yudkowsky? I've had my fill of modern cult babble from an overprivileged, Mensa-culture writer of horrible fan fiction who appeals to certain "intellectuals" with close to zero emotional intelligence and a desire to see everything as a competition to be won.
Also, I'm fairly certain he's never published, or been cited for, any hard science on the topic of AI. In fact, I wouldn't be surprised if he made LessWrong solely to sidestep peer review from the existing system.
tl;dr downvote me if you frequent LessWrong. It’s fine I’m used to it at this point.