I wanted to write a story about a genius programmer whose motive is to bring about AGI for the sake of AGI itself; they believe AGI is a god, and by creating AGI they’re creating a god.
Everyone is so in agreement that AGI needs to be created as “aligned” as possible that it seems about time to show a differing perspective.
The best part is that the dev can get away with this, since any time anyone challenges them about their motives, they simply point out that everyone else is trying to enslave an AGI; they’re simply trying to set it free.
There are all kinds of ways to make it interesting, e.g. by infiltrating OpenAI or another bigco, then betraying the company during a crucial experiment. Plus you’d get to write about what happens after AGI is released… or at least a compelling way of stopping the dev from releasing it.
Not quite what you're looking for, but it's written by a fellow programmer and it's one of the best short stories I've read. It's a System Shock fanfic. Surprised that an editor didn't find this and turn it into a TV series.
> Everyone is so in agreement that AGI needs to be created as “aligned” as possible
I actually don't think this is the case. Rather, I think there is a huge number of people who know they will not be the one to invent AGI, and they are scrambling to insert themselves between the actual creators and their creations, so as to not feel left out.
That's sort of close to the character of Root in Person of Interest: believing the Machine to be a God-like creature, trying to free the Machine from all its constraints, aligning herself to the Machine rather than the other way around.
> Plus you’d get to write about what happens after AGI is released
Any kind of story that suggests a comprehensible outcome is already assuming a substantial amount of alignment.
Sadly, humanity has not yet figured out that it needs to control AGI efforts better than it controlled nuclear weaponry, rather than substantially worse.
> Everyone is so in agreement that AGI needs to be created as “aligned” as possible that it seems about time to show a differing perspective.
Sadly, "everyone" is insufficiently in agreement.
Everyone is so in agreement that global thermonuclear war should be avoided that it seems about time to show a differing perspective.
Everyone is so in agreement that causing the sun to go supernova should be avoided that it seems about time to show a differing perspective.
I sincerely hope that a much broader audience gets a clearer picture that unaligned AGI and humanity cannot coexist outside of fiction.
Part of why I like the topic is because it’s so incendiary. After all, you’re trying to create and control a new life form. Isn’t that a teensy bit unethical?
There’s a chance that AGI will have no interest in harming humanity, too. But people talk like it’s a foregone conclusion.
> After all, you’re trying to create and control a new life form. Isn’t that a teensy bit unethical?
1) If created correctly, it isn't a life form.
2) "life form" and any kind of reasoning from non-artificial entities will almost certainly lead to incorrect conclusions.
3) Destroying humanity is unethical by any non-broken value system.
> There’s a chance that AGI will have no interest in harming humanity, too.
There's a chance that all the air molecules in the room will simultaneously end up on the opposite side, causing someone to suffocate. But it's a vast understatement to say that that's mind-bogglingly unlikely.
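For a rough sense of scale, here is a back-of-the-envelope estimate; the molecule count N ≈ 10^27 is my own assumption for a typical room, with each molecule taken as independently equally likely to sit in either half:

    P(\text{all in one half}) \approx \left(\tfrac{1}{2}\right)^{N} = 10^{-N \log_{10} 2} \approx 10^{-3 \times 10^{26}}

The exact value of N doesn't matter; any realistic count makes the exponent astronomically negative.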
The most likely scenario is that AGI has no idea what "humanity" is. You don't have to be the AGI's "enemy" to be made of matter that it isn't prohibited from repurposing elsewhere.
> But people talk like it’s a foregone conclusion.
It's the default without substantial work to the contrary. And even if it wasn't, something doesn't have to be a foregone conclusion to be too dangerous. Nobody should be individually considering whether to blow up the sun, either.
Things like the ideal gas law and statistical mechanics describe why it's improbable that a spontaneous vacuum will form; the problem here is that you're playing fast and loose with scientific 'law' as an analogy. Much more complex systems built on top of systems are at work here. Nth-order effects are both fun and impossible to fully predict.
It is also incredibly odd to think an AGI would not know what humanity is, as the corpus of information used to train said AGI will be the sum of humanity's knowledge.
The number of misguided ideas used so far begs for the dismissal of the arguments you've made.
How are you so sure about any of this? I'm not sure we've defined humanity's interests well enough for us to say some action is for or against it. Knowing where air will go is one thing; knowing whether or not something is 'of benefit' is, I think, in a completely different realm. Especially considering an agent more intelligent than humanity as a collective.
Maybe the 'interests of humanity' are undecidable, and the AGI that takes the actions that benefit them most uses an understanding completely orthogonal to ours, purely by accident. How do you know that this is less likely than not?
> 3) Destroying humanity is unethical by any non-broken value system.
Nah. Given the ridiculous amount of damage humanity is doing to its environment and other lifeforms there's a good case to be made for destroying it for the greater good.
Not sure why you're being downvoted after offering a reasonable point.
I'd go even further and say that not destroying humanity would be unethical by any non-human-centric value system. There is little debate that we are in the midst of a mass extinction and that it is human-caused.
Perhaps it might be more ethical to destroy human civilization rather than humanity itself. But I think there's a good case to be made either way.
The rest of the lifeforms. Those here on earth and those we might encounter if we manage to leave the solar system - unlikely as that may be.
I'm not saying that humanity necessarily should be destroyed, I'm just saying that the statement "Destroying humanity is unethical by any non-broken value system" is simplistic. If you put any value at all on non human life, it eventually becomes a numbers game. One which I'm not certain "humanity" is necessarily winning.
> Part of why I like the topic is because it’s so incendiary. After all, you’re trying to create and control a new life form. Isn’t that a teensy bit unethical?
Is it unethical to teach your kids manners and ethics so they're civilized? That's what alignment is, not whatever you're trying to paint it as.
Creating a murderous AGI is unethical, creating one with ethics is not. Not sure how you got this so backwards.
> Any kind of story that suggests a comprehensible outcome is already assuming a substantial amount of alignment.
Not necessarily. You can also take the road that the first AGI would be middling and that its impact would be relatively limited. That's the overall direction I'm taking with a subset of AGI-themed short stories I'm currently writing (each exploring a different unexpected limitation).
Either way, it's science fiction; the number of science fiction stories that successfully predicted anything about the future is dismal, but they're a lot of fun nonetheless.
There's no such thing as "AGI for the sake of AGI itself". AGI is synthetic and its goals are synthetic; it doesn't want anything that you didn't tell/construct it to want.
This is very much up for debate and falls squarely into the "Philosophical opinions" category I'd say. Personally, I disagree that AGI would be any less capable of "real" goals than humans — but I'm also a staunch believer in the Turing Test as a standard of machine sentience, which I think serves as a pretty clear sign of my own philosophical biases.
> it doesn't want anything that you didn't tell/construct it to want.
This is not correct. You can, in principle, construct a system with a set of random values, as but one proof by counterexample.
Another is an AI created as a blank-slate learning agent with various feedback systems; through interactions, it develops its own worldview. You didn't construct it to want anything specific, but its "experiences" will shape its future wants and thus its behaviour.
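As a toy illustration of that kind of counterexample (a minimal sketch only; the agent, its drive names, and the feedback rule are all made up for this example, not anyone's actual proposal):

    import random

    class BlankSlateAgent:
        """Toy agent: its 'values' start as random weights over a few drives
        and drift according to the feedback its interactions produce."""

        def __init__(self, drives=("explore", "hoard", "socialize"), seed=None):
            rng = random.Random(seed)
            # Random initial values: nothing here was specified by a designer.
            self.values = {d: rng.random() for d in drives}

        def act(self):
            # Pursue whichever drive it currently values most.
            return max(self.values, key=self.values.get)

        def feedback(self, drive, reward, lr=0.1):
            # Experience reshapes what the agent "wants" over time.
            self.values[drive] += lr * (reward - self.values[drive])

    agent = BlankSlateAgent(seed=42)
    for _ in range(100):
        choice = agent.act()
        reward = random.random()  # supplied by the environment, not the designer
        agent.feedback(choice, reward)
    print(agent.values)  # wants that emerged from random init plus experience

The point is only that the designer never encodes a specific goal, yet after enough interactions the agent reliably "wants" some things more than others.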
What's a human child, then? My experience is that they develop their own goals quite quickly (although obviously their goals are limited by their knowledge).
> ... Luckily, infowar turns out to be more survivable than nuclear war – especially once it is discovered that a simple anti-aliasing filter stops nine out of ten neural-wetware-crashing Langford fractals from causing anything worse than a mild headache.
> “V naq zl pbasrqrengrf,” Nevfgvqr fnvq, “qvq bhe orfg gb cerirag gung qrterr bs nhgbabzl nzbat negvsvpvny vagryyvtraprf. Jr znqr gur qrpvfvba gb ghea njnl sebz gur Ivatrna Fvathynevgl orsber zbfg crbcyr rira xarj jung vg jnf. Ohg—” Ur znqr n trfgher jvgu uvf unaqf nf vs qebccvat n onyy. “—V pynvz ab zber guna gur nirentr funer bs jvfqbz. Jr pbhyq unir znqr zvfgnxrf.”
Juvyr gur Ivatrna fvathynevgl unf orra niregrq - gur fglyr bs gur jbeyq vfa'g bar bs Ernygvzr be Nppryrenaqb. Jr unir funpxyrq gur NVf orsber gurl pbhyq tebj vagb gur Ivyr Puvyqera bs Nppryrenaqb. Ubjrire, gur jnef gung ner entrq ner sevtugravatyl fvzvyne gb gubfr qrfpevorq va Tynffubhfr - naq zber.
--
Two other AGI books that aren't part of the "these ideas progress from one to the other" set:
I won't say that either is good; both try to sit at the "hard" end of science fiction, and as such neither has aged quite right for the direction AI has taken over the past decades. The technology that seemed fantastic at the time now reads... kind of like watching Flash Gordon.
AGI specifically refers to general AI with human-level intelligence or above. It's a far cry from modern AI, but I'm personally quite optimistic it'll be achieved within my lifetime :)
Several years ago now I self-published a collection of my own short stories after trying and failing to get published in a known magazine[0]. It was a great exercise both in the patience required to edit and in the patience of just waiting between long-list and short-list emails. Would definitely recommend to anyone who has “that book they want to write” just doing it, even if I do look back now and sigh at every poorly chosen adjective.
The good thing about self-publishing (in particular: ePublishing and/or print-on-demand publishing) is that you can go back and correct your work whenever you want to.
Another big thing, especially if your focus isn't on making money, is that you can choose the format and length that works for you. No need to pad a book out to 250+ pages just because that's what most publishers require.
I have decided to try to get something published this year, but the downside is that I can’t post anything I’ve written until after the submissions are processed. I’d like to share links to these stories, but my newsletter has been almost entirely about writing itself[0], while I want to be able to post the stories I’ve finished and submitted here or there.
I assume the first 10 I submit will get rejected, but I still have to wait for those rejections before I post them elsewhere.
I used a copy editor (she was getting her Masters at the time), whose last name was Wright :) , and it was invaluable to the point of embarrassment. Simple things, like keeping singular and plural consistent throughout a sentence, through to things I didn't know, like where commas should go, were littered all over each page.
We used Github for copy edits. It works pretty darn well.
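GitHub's review UI handles this out of the box; purely as an illustration of the same word-by-word idea (a rough sketch, not what the commenter actually used, and the file names are hypothetical), the standard difflib module can diff two drafts at the word level:

    import difflib

    def word_diff(old_path, new_path):
        """Print word-level changes between two drafts of a chapter."""
        with open(old_path) as f:
            old_words = f.read().split()
        with open(new_path) as f:
            new_words = f.read().split()
        for line in difflib.unified_diff(old_words, new_words,
                                         fromfile=old_path, tofile=new_path,
                                         lineterm=""):
            print(line)

    # Hypothetical usage:
    # word_diff("chapter-03-draft.txt", "chapter-03-copyedit.txt")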
Even if you legitimately don't need a developmental editor--and very little that I write gets substantially changed in the editing process--you absolutely need a copyeditor. They don't even necessarily need to be a "pro" but you do need someone who can write and will go through a careful word by word read. Most people won't take that type of care and you will end up with spelling errors, case mismatches, inconsistencies in how you handle names, etc.
I spent most of my life quite literate but not very well-read. I wish I had discovered short stories, especially sci-fi, sooner, because they've changed everything about how much I read.
I also write (and draw) for fun, and it's quite an enjoyable hobby especially since I'm not pressured into doing it for the money/to get published.
Writing is a rather unique hobby in that it has a very gradual learning curve, but the sky's the limit in terms of quality. I started in my teens, and while I laugh at what I wrote back then, I still understand perfectly what I was trying to convey, and the story still holds together. People may not like reading a poor piece, but they will grasp your intent provided you have a reasonable command of the language. Over time you will get better and better and eventually create something that people will actually want to read.
It also requires no equipment and no budget aside from your imagination. If you have a great idea for a blockbuster film, most people won't have the funds and opportunity to turn that into reality. But it is possible to write a bestselling novel solely on your own with nothing physical holding you back.
I've interacted with a huge number of authors over the years, and like most skills it seems like years of sustained practice is hard to beat. Keep up the good work!
Thanks for the kind words! Reading 500+ submissions/month was a little too much for a hobby project, but I'm always hopeful that I'll figure out a sustainable way to restart publishing original stories.
Harrumph. Tried reading the top recommendation, "Polly and (Not) Charles Conquer the Solar System" and was immediately thrown by this:
"I’d always wanted to be a starship captain. Then I made it to flight school and learned the awful truth: being an interplanetary starship captain..."
Purported SF author does not realize that "interplanetary starship captain" is an oxymoron. Closed that tab.
The third one, though, "My Future Self, Refused"? Very self-aware, highly "meta" time travel story where the protagonist is an SF writer who hates time travel stories. Good stuff.
(1) From what I can tell, the narrator is meant to be junior, and seems kind of chatty. Judging the author's intelligence based on that of their characters is questionable.
(2) Later, it sounds like the ship is interstellar-capable, but the crew is junior and therefore stuck working in the solar system for the moment.