Top science fiction short stories published in August (compellingsciencefiction.com)
180 points by mojoe on Oct 9, 2022 | hide | past | favorite | 54 comments



I wanted to write a story about a genius programmer whose motive is to bring about AGI for the sake of AGI itself; they believe AGI is a god, and by creating AGI they’re creating a god.

Everyone is so in agreement that AGI needs to be created as “aligned” as possible that it seems about time to show a differing perspective.

The best part is that the dev can get away with this, since any time anyone challenges them about their motives, they simply point out that everyone else is trying to enslave an AGI; they’re simply trying to set it free.

There’s all kinds of ways to make it interesting, e.g. by infiltrating OpenAI or another bigco, then betraying the company during a crucial experiment. Plus you’d get to write about what happens after AGI is released… or at least a compelling way of stopping the dev from releasing it.


Not quite what you’re looking for but it’s written by a fellow programmer. It’s one of the best short stories I’ve read. It’s a System Shock fanfic. Surprised that an editor didn’t find this and turn it into a TV series

https://www.smashwords.com/books/view/124443


Thank you very much for linking this. The preface is surprisingly inspirational.


There's an element of this in Dan Simmons' Hyperion Cantos.


I love AGI stories, if you write this please send it over! joe@compellingsciencefiction.com


> Everyone is so in agreement that AGI needs to be created as “aligned” as possible

I actually don't think this is the case. Rather, I think there is a huge number of people who know they will not be the one to invent AGI, and they are scrambling to insert themselves between the actual creators and their creations, so as to not feel left out.


That's sort of close to the character of Root in Person of Interest: believing the Machine to be a God-like creature, trying to free the Machine from all its constraints, aligning herself to the Machine rather than the other way around.


> Plus you’d get to write about what happens after AGI is released

Any kind of story that suggests a comprehensible outcome is already assuming a substantial amount of alignment.

Sadly, humanity has not yet figured out that it needs to control AGI efforts better than it controlled nuclear weaponry, rather than substantially worse.

> Everyone is so in agreement that AGI needs to be created as “aligned” as possible that it seems about time to show a differing perspective.

Sadly, "everyone" is insufficiently in agreement.

Everyone is so in agreement that global thermonuclear war should be avoided that it seems about time to show a differing perspective.

Everyone is so in agreement that causing the sun to go supernova should be avoided that it seems about time to show a differing perspective.

I sincerely hope that a much broader audience gets a clearer picture that unaligned AGI and humanity cannot coexist outside of fiction.


Part of why I like the topic is because it’s so incendiary. After all, you’re trying to create and control a new life form. Isn’t that a teensy bit unethical?

There’s a chance that AGI will have no interest in harming humanity, too. But people talk like it’s a foregone conclusion.


> After all, you’re trying to create and control a new life form. Isn’t that a teensy bit unethical?

1) If created correctly, it isn't a life form.

2) Reasoning about it as a "life form", or by analogy to any non-artificial entity, will almost certainly lead to incorrect conclusions.

3) Destroying humanity is unethical by any non-broken value system.

> There’s a chance that AGI will have no interest in harming humanity, too.

There's a chance that all the air molecules in the room will simultaneously be on the opposite side, causing someone to suffocate. But it's a vast understatement to say that that's mind-bogglingly unlikely.
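The "mind-bogglingly unlikely" claim can be made concrete with back-of-the-envelope arithmetic (the molecule count is an order-of-magnitude assumption, not a measurement):

```python
import math

# Probability that every one of N independent gas molecules happens to
# sit in the same half of a room at one instant is (1/2)**N.
N = 10**23  # rough order of magnitude for a smallish room (assumption)

# (1/2)**N underflows floating point to 0.0, so work in log10 space.
log10_p = N * math.log10(0.5)
print(f"log10(probability) ~ {log10_p:.4e}")  # about -3.0e22
```

That is a probability of roughly 1 in 10^(3×10^22), which is the sense in which "unlikely" is an understatement.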

The most likely scenario is that AGI has no idea what "humanity" is. You don't have to be the AGI's "enemy" to be made of matter that it isn't prohibited from repurposing elsewhere.

> But people talk like it’s a foregone conclusion.

It's the default without substantial work to the contrary. And even if it wasn't, something doesn't have to be a foregone conclusion to be too dangerous. Nobody should be individually considering whether to blow up the sun, either.


Things like the ideal gas law describe why it's improbable that a spontaneous vacuum will form; the problem here is that you're playing fast and loose with scientific 'law' as an analogy. Much more complex systems built on top of systems are in play here. Nth-order effects are both fun and impossible to fully predict.

It is also incredibly odd to think AGI would not know what humanity is, as the corpus of information used to train said AGI will be the sum knowledge of humanity.

The number of misguided ideas used so far begs for the dismissal of the arguments you've made.


How are you so sure about any of this? I'm not sure we've defined humanity's interests well enough for us to say some action is for or against it. Knowing where air will go is one thing; knowing whether or not something is 'of benefit' is, I think, in a completely different realm. Especially considering an agent more intelligent than humanity as a collective.

Maybe the 'interests of humanity' are undecidable, and the AGI that takes the actions that benefit them most uses an understanding completely orthogonal to ours, purely by accident. How do you know that this is less likely than not?


Why do I have the feeling that I'm reading the rationalizations of a species which is about to disappear?


Too much Twitter? Not enough sci-fi?

I suggest Philip K Dick for this condition. For every condition, tbh


> 3) Destroying humanity is unethical by any non-broken value system.

Nah. Given the ridiculous amount of damage humanity is doing to its environment and other lifeforms there's a good case to be made for destroying it for the greater good.


Not sure why you're being downvoted after offering a reasonable point.

I'd go even further and say that not destroying humanity would be unethical by any non-human-centric value system. There is little debate that we are in the midst of a mass extinction and that it is human-caused.

Perhaps it might be more ethical to destroy human civilization rather than humanity itself. But I think there's a good case to be made either way.


Whose greater good?


The rest of the lifeforms. Those here on earth and those we might encounter if we manage to leave the solar system - unlikely as that may be.

I'm not saying that humanity necessarily should be destroyed, I'm just saying that the statement "Destroying humanity is unethical by any non-broken value system" is simplistic. If you put any value at all on non-human life, it eventually becomes a numbers game. One which I'm not certain "humanity" is necessarily winning.


You first.


> Part of why I like the topic is because it’s so incendiary. After all, you’re trying to create and control a new life form. Isn’t that a teensy bit unethical?

Is it unethical to teach your kids manners and ethics so they're civilized? That's what alignment is, not whatever you're trying to paint it as.

Creating a murderous AGI is unethical, creating one with ethics is not. Not sure how you got this so backwards.


> Any kind of story that suggests a comprehensible outcome is already assuming a substantial amount of alignment.

Not necessarily. You can also take the road that the first AGI would be middling and that its impact would be relatively limited. That's the overall direction I'm taking with a subset of AGI-themed short stories I'm currently writing (each exploring a different unexpected limitation).

Either way, it's science fiction; the number of science fiction stories that successfully predicted anything about the future is dismal, but they're a lot of fun nonetheless.


There's no such thing as "AGI for the sake of AGI itself". AGI is synthetic and its goals are synthetic, it doesn't want anything that you didn't tell/construct it to want.


This is very much up for debate and falls squarely into the "Philosophical opinions" category I'd say. Personally, I disagree that AGI would be any less capable of "real" goals than humans — but I'm also a staunch believer in the Turing Test as a standard of machine sentience, which I think serves as a pretty clear sign of my own philosophical biases.


By definition, AI without intent or understanding is not AGI.

It's why there's a qualification to the term, because the old term "AI" was hijacked to mean the statistical mimicry we have today.

AGI by example: R. Daneel Olivaw, or the Minds in the Culture novels.


> it doesn't want anything that you didn't tell/construct it to want.

This is not correct. You can, in principle, construct a system with a set of random values, as but one proof by counterexample.

Another is an AI created as a blank-slate learning agent with various feedback systems; through interactions, it develops its own worldview. You didn't construct it to want anything specific, but its "experiences" will shape its future wants and thus its behaviour.
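The blank-slate counterexample can be sketched as a toy Python agent (everything here is a hypothetical illustration, not any real system): its initial "values" are random noise, and only its own interaction history reshapes them into stable preferences.

```python
import random

# Toy agent: initial preferences over actions are random, and they are
# then shaped only by reinforcement from whatever environment it meets.
class BlankSlateAgent:
    def __init__(self, actions, seed=None):
        rng = random.Random(seed)
        # The designer never specifies what the agent should want:
        # the starting values are pure noise.
        self.values = {a: rng.random() for a in actions}

    def act(self):
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, lr=0.1):
        # Experience, not the designer, reshapes the preferences.
        self.values[action] += lr * (reward - self.values[action])

agent = BlankSlateAgent(["explore", "hoard", "rest"], seed=42)
for _ in range(100):
    a = agent.act()
    # Hypothetical environment: rewards exploring, punishes hoarding.
    reward = {"explore": 1.0, "hoard": -0.5, "rest": 0.1}[a]
    agent.learn(a, reward)
print(agent.act())  # the agent's emergent "want"
```

The designer wrote no goal into the agent; which action it ends up "wanting" falls out of the random seed plus its history with the environment.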


What's a human child, then? My experience is that they develop their own goals quite quickly (although obviously their goals are limited by their knowledge).


There's a sequence of stories/books that I like on the subject in that the later ones reference the earlier ones.

We start off with BLIT. http://www.infinityplus.co.uk/stories/blit.htm https://www.nature.com/articles/44964 https://www.lightspeedmagazine.com/fiction/different-kinds-o...

That wasn't so much a story about AGI, just a starting spot.

Then we go to Accelerando. https://www.antipope.org/charlie/blog-static/fiction/acceler...

> ... Luckily, infowar turns out to be more survivable than nuclear war – especially once it is discovered that a simple anti-aliasing filter stops nine out of ten neural-wetware-crashing Langford fractals from causing anything worse than a mild headache.

Accelerando ends up with AGI.

And while it's not a sequel to Accelerando... Glasshouse https://www.goodreads.com/book/show/17866.Glasshouse

At this point, spoilers. So the "why's" of each are going to be rot13 ( https://rot13.com )
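If you'd rather decode the spoilers programmatically instead of visiting that site, Python's standard library handles ROT13 directly:

```python
import codecs

# ROT13 is its own inverse: applying it twice returns the original text.
def rot13(text: str) -> str:
    return codecs.encode(text, "rot_13")

print(rot13("Tynffubhfr"))  # -> Glasshouse
```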

Tynffubhfr rkcyberf va vgf onpxfgbel fbzr bs gur cbffvovyvgvrf bs jne jvgu ercyvpngvba ninvynoyr - vg qbrfa'g qrny jvgu NTVf nf fhpu. Gurer'f Phevbhf Lryybj, ohg gung'f abg gur fhcre uhzna bs gur Ivyr Bssfcevat, ohg vgf... fbzrguvat. Gur N-tngrf naq G-tngrf naq gur prafbeobgf vasrpgvat gur zvaq bs crbcyr jub tb guebhtu gurz. Naq hygvzngryl, onggyrf orgjrra guvatf gung urneyq va n qnex ntr jvgu fb znal ybffrf.

We're going to take a detour over to some Verner Vinge. We'll start off with True Names and Other Dangers - https://ia801004.us.archive.org/0/items/truenamesvingevernor... as one view, and the other with the Peace War and Marooned in Realtime. https://www.goodreads.com/en/book/show/167844.Across_Realtim...

True Names is directly applicable, but Realtime is a view of what the singularity could be. It is also remarkably devoid of AI.

And next, we head over to Implied Spaces https://www.goodreads.com/en/book/show/2059573

> “V naq zl pbasrqrengrf,” Nevfgvqr fnvq, “qvq bhe orfg gb cerirag gung qrterr bs nhgbabzl nzbat negvsvpvny vagryyvtraprf. Jr znqr gur qrpvfvba gb ghea njnl sebz gur Ivatrna Fvathynevgl orsber zbfg crbcyr rira xarj jung vg jnf. Ohg—” Ur znqr n trfgher jvgu uvf unaqf nf vs qebccvat n onyy. “—V pynvz ab zber guna gur nirentr funer bs jvfqbz. Jr pbhyq unir znqr zvfgnxrf.”

Juvyr gur Ivatrna fvathynevgl unf orra niregrq - gur fglyr bs gur jbeyq vfa'g bar bs Ernygvzr be Nppryrenaqb. Jr unir funpxyrq gur NVf orsber gurl pbhyq tebj vagb gur Ivyr Puvyqera bs Nppryrenaqb. Ubjrire, gur jnef gung ner entrq ner sevtugravatyl fvzvyne gb gubfr qrfpevorq va Tynffubhfr - naq zber.

--

Two other AGI books that aren't part of the "this has ideas that progress from one to the other."

The first is The Turing Option by Harry Harrison and Marvin Minsky. https://www.goodreads.com/book/show/1807642.The_Turing_Optio...

The second is When H.A.R.L.I.E. Was One: Release 2.0 https://www.goodreads.com/book/show/939176.When_H_A_R_L_I_E_...

I won't say that either is good. Both aim for the "hard" end of science fiction, and as such neither has aged quite right for the direction AI has taken over the past decades; the technology that seemed fantastic at the time now reads kind of like watching Flash Gordon.


The most common meaning of AGI, at least in U.S., is Adjusted Gross Income.

But after searching for a while, I suspect that what you are referring to is this:

https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


Thank you. I was pretty sure they meant AI, but apparently missed the memo that we were calling it AGI now.


AGI specifically refers to general AI with human-level intelligence or above. It's a far cry from modern AI, but I'm personally quite optimistic it'll be achieved within my lifetime :)


Several years ago now I self-published a collection of my own short stories after trying and failing to get published in a known magazine[0]. It was a great exercise in both the patience required to edit, and also the patience in just waiting between long- and short-list emails. Would definitely recommend to anyone who has “that book they want to write” just doing it, even if I do look back now and sigh at every poorly chosen adjective.

[0] https://www.amazon.com/dp/B082QT6XW7


The good thing about self-publishing (in particular: ePublishing and/or print-on-demand publishing) is that you can go back and correct your work whenever you want to.


Another big thing, especially if your focus isn't on making money, is that you can choose the format and length that works for you. No need to pad a book out to 250+ pages just because that's what most publishers require.


I have decided to try and get something published this year, but the downside is that I can’t post anything I’ve written until after the submissions are processed. I’d like to share links to these stories, but my newsletter has been almost entirely about writing itself[0], whereas I want to be able to post the stories I’ve finished and submitted here or there.

I assume the first 10 I submit will get rejected, but I still have to wait for those rejections before I post them elsewhere.

Congrats on self publishing your work!

[0] link in my profile


It's a great path if you have a little bit of an entrepreneurial streak too -- it's a big marketing challenge.


Did you work with an editor or purely self-edit?


90% self-edit, 10% friend with an English degree. In hindsight, having since worked with editors on commercial projects, an editor is absolutely worth it.


What are the most important things an editor brings to someone considering self publishing?


Outside perspective.

Market knowledge.

Experience with the structure and requirements of a story, especially a long story.

Contacts.

An indifference to the 'darlings' in your sentences. They'll kill them without mercy.

Self-editing is completely doable, and some are very capable, but a professional editor is just that: a professional.


I used a copy editor (she was getting her Masters at the time), whose last name was Wright :) , and it was invaluable to the point of embarrassment. Everything from simple things, like using singular and plural correctly throughout a sentence, to things I didn't know, like where commas should go, were littered all over each page.

We used Github for copy edits. It works pretty darn well.


Even if you legitimately don't need a developmental editor--and very little that I write gets substantially changed in the editing process--you absolutely need a copyeditor. They don't even necessarily need to be a "pro" but you do need someone who can write and will go through a careful word by word read. Most people won't take that type of care and you will end up with spelling errors, case mismatches, inconsistencies in how you handle names, etc.


I spent most of my life quite literate but not very well-read. I wish I had discovered short stories sooner, especially sci-fi, because they've changed everything about how much I read.


I can't recommend Ted Chiang's stories enough.


I've gotten really into writing fiction the past three years. I don't know that I'm genetically gifted for it but it's fun.


I also write (and draw) for fun, and it's quite an enjoyable hobby especially since I'm not pressured into doing it for the money/to get published.

Writing is a rather unique hobby in that it has a very gradual learning curve, but the sky's the limit in terms of quality. I started in my teens, and while I laugh at what I wrote then, I still perfectly understand what I was trying to convey and can follow the story. People may not like reading a poor piece, but they will grasp your intent provided you have a reasonable command of the language. Over time you get better and better, and eventually you create something that people will actually want to read.

It also requires no equipment and no budget aside from your imagination. If you have a great idea for a blockbuster film, most people won't have the funds and opportunity to turn that into reality. But it is possible to write a bestselling novel solely on your own with nothing physical holding you back.


I've interacted with a huge number of authors over the years, and like most skills it seems like years of sustained practice is hard to beat. Keep up the good work!


So sad that you stopped publishing the compelling science fiction stories! By far my favorite collection of stories!

But great that you blog regularly now!


Thanks for the kind words! Reading 500+ submissions/month was a little too much for a hobby project, but I'm always hopeful that I'll figure out a sustainable way to restart publishing original stories.


https://www.royalroad.com/fiction/57747/simulacrum-heavens-k...

Let me plug my own, though it is not a short story. It is about a kid who is LARPing as an unaligned AI.


Let's continue the thread.

My first try at short sci-fi story. No spoilers, just the link.

https://initsix.dev/immutable/


I have just read the first chapter and you have a new fan.

Beautiful work.


I heard “My Future Self, Refused” on Lightspeed’s podcast. Nice.

I suspect most people commenting here would enjoy listening to Escape Pod.

Edit: and the list reminded me to add Clarkesworld to my podcast app.


Harrumph. Tried reading the top recommendation, "Polly and (Not) Charles Conquer the Solar System" and was immediately thrown by this:

"I’d always wanted to be a starship captain. Then I made it to flight school and learned the awful truth: being an interplanetary starship captain..."

Purported SF author does not realize that "interplanetary starship captain" is an oxymoron. Closed that tab.

The third one, though, "My Future Self, Refused"? very self-aware, highly "meta" time travel story where the protagonist is an SF writer who hates time travel stories. Good stuff.


Regarding the “interplanetary starship captain”:

(1) from what I can tell, the narrator is meant to be junior, and seems kind of chatty. Judging the author’s intelligence based on that of their characters is questionable.

(2) later, it sounds like the ship is interstellar-capable but the crew is junior and therefore stuck working in the solar system for the moment





