I really enjoyed There Is No Antimemetics Division. It's self-published and is, as I understand it, a collection of the author's contributions to SCP [0]. Given that, my expectations about the quality of the writing were pretty low. But it way exceeded them and is some of the most engaging speculative fiction I've read in a long time.
> QNTM: Do it. If it's your first time, try a second time as well. Your first attempt might be okay or it might not be so okay, but it's something you can get better at over time. You can practice, you can get good.
This advice rings true, because it's reflected very clearly in qntm's own writing. He's been publishing his fiction online for close to two decades, and I've been following his website for almost as long. He's always been a talented and imaginative writer, but the craftsmanship of his writing has kept getting better and better.
And speaking carefully to avoid spoilers, I'd say my primary delight in the qntm et al. Antimemetics Division stories is how they lean into the universe to subvert expectations. They never turn out how you expect.
I really loved Ra [1], another of his books. It starts with "Magic is real in the modern world, and a subject of engineering", and it gets much, much crazier from there. It's my favorite of his books/stories.
What I particularly liked about it is that it followed this geek's impulse of "yeah but why/how?" and answered it. Then answered the question after that. And after that. It was an exponentially wild ride.
It's not a perfect book. The characters are, honestly, kind of weak in an overpowered kind of way; it's more an idea-driven book than plot- or character-driven.
But I think the HN crowd would really like it, and it deserves wider recognition!
It's hard to overstate how good qntm's work is, at least in this area. The metaphysical underpinnings and world-building are really well considered and the characterization is great.
For whatever reason, SCP is some of the best sci-fi writing out there, especially the short stories outside of the (already incredible) entity wiki entries.
Well, there are thousands of entries, with bad ones being culled frequently, and it has been going on for over a decade. There's bound to be some good stuff in there.
I think there's also something to be said for the canonization of, and building on, more advanced concepts: for example antimemetics, or pataphysics, or even shorthand like the Scranton Reality Anchor or telekill alloy. All these concepts are tools that an author can use without having to reinvent or explain them, which makes shorter fiction easier. Or they can just ignore them entirely if it's inconvenient to the plot. Every entry there stands on the shoulders of all the others.
I can see why. Too many containment procedures were calling for absurd amounts of it. They rewrote it so it has some more disadvantages, and to cull its usage some. I actually did a reread of Series 1 and 2 recently. It was interesting to go that far back, as I haven't read any since the lolFoundation days. Amazing how many bad ones are back there, or just ones missing that interesting kick that characterizes even the simpler Series 4 or 5 ones.
Side-note: My favorite Wikipedia page is the List of Eponymous Laws. It's a very eclectic collection of interesting topics to learn about, many of which are worth knowing.
One recent anti-meme is words that you are not allowed to say even to refer to them as words. If you didn't know the word, the phrase "the N-word" is not very enlightening, but people have been censured and even fired for using the actual word just to refer to it as a word [1].
In addition, if you search for the actual word (not "n-word") on HN, none of the articles are from the past year. (There are two submissions from the past year, but the articles are from 1999 and 1971; the submissions have a total of 11 upvotes.)
I recently ran into an article that used the phrase "the R-word" and I had to ask my teenage daughter which particular word that referred to. It's now very googleable, but at the time none of the top 5 pages on google indicated what the word might be.
1: One example: a white teacher at a meeting discussing standards for materials used in the classroom. One rule disallowed books with the n-word. The teacher said roughly: "So if there is a book about the black experience, written by a black author, I can't use it in my classroom because it has the word 'n*****' in it?"
For what it's worth, this stuff is culturally relative. The r-word is apparently "retard", and not an unsayable slur within my cultural sphere, to the best of my limited knowledge.
I'd like to disagree with the idea of a "treadmill", because it implies that changing language doesn't accomplish anything and leaves us where we started. The Wikipedia entry mentions "moron", "imbecile", and "retard" as examples which started as medical diagnoses before becoming insults.
It implies that the medical community moved away from these terms because they became insults. However, these are all terms derived from IQ testing and the concept of "mental age". IQ has been de-emphasized as the diagnostic criterion for a couple reasons: people with mental disabilities aren't the same as younger people without mental disabilities; the focus should be on what sort of help people need rather than what they can't do.
> In current medical diagnosis, IQ scores alone are not conclusive for a finding of intellectual disability. Recently adopted diagnostic standards place the major emphasis on the adaptive behavior of each individual, with IQ score just being one factor in diagnosis in addition to adaptive behavior scales, and no category of intellectual disability being defined primarily by IQ scores.
As for the common use as an insult, you're right that its impact is culturally relative. To me, I would ask why you're using a term which used to be a medical diagnosis as an insult. The wiki section says kids are saying "what are you, 'special'?" in reference to "special needs". Does that mean it's bad to get that medical diagnosis? I'd side-eye someone who says "retard" as an insult because it's not just rude to the insultee.
I agree that sometimes some people deserve to be insulted; particularly for their own good (for instance they were carelessly doing something risky and need to be reminded to take themselves off autopilot and pay attention to what they're doing.)
But I think collateral damage may be unavoidable no matter which word you use for this. Wiktionary tells me that the word 'stupid' comes from the Latin stupidus (“struck senseless, amazed”). So 'stupid' didn't originate as a medical diagnosis and is ostensibly a 'safe' way to question somebody's intelligence. But I don't think it actually is; when you call somebody 'stupid' you're still framing a lack of intelligence as an undesirable trait. If somebody who earnestly does lack intelligence overhears this, they might be reminded of their own limitation and feel ashamed or inadequate because of it.
Any intelligence-impugning insult can have collateral damage, whether or not that word was originally a medical diagnosis. I think the solution to this dilemma is to be aware of who might overhear you. I avoid any intelligence-impugning insult when I'm in a room with people who are actually intellectually disabled. But if nobody like that is in the room to overhear it, then I think any of the common intelligence-impugning insults are fair game if the circumstances justify it.
> But I think collateral damage may be unavoidable no matter which word you use for this ... they might be reminded of their own limitation and feel ashamed or inadequate because of it.
The issue is that when people use "retarded" as an insult, they are saying people with intellectual disabilities should feel ashamed. When they use r- as an insult, they're saying that their target is acting like they have an intellectual disability, and that that's bad.
This is distinct from, though related to, criticizing people's intelligence. When you say something is stupid, you claim you're saying it's bad because it's unintelligent. Is that why you're criticizing it? Should you be criticizing things for being unintelligent? This is a more general point than the use of r- as an insult, but it is indeed a topic which is raised by avoiding r-. I don't think the broader topic should be used to avoid talking about r- though.
> I think the solution to this dilemma is to be aware of who might overhear you.
The type of impact depends on who hears it, but you don't need a target of the insult for the insult to be bad. You are still perpetuating the idea that people with intellectual disabilities are shameful when you use r-.
Old terms will fall away one way or another; this is just an observation that popular neutral terms for negative things will be continually repurposed as pejoratives, and then fall away in their turn.
Because it’s bad to be stupid. People are ashamed to be stupid because it’s a shortcoming. There is no world in which we can value a trait (intelligence) and not feel proportionately bad about its crippling deficiency. Pretending otherwise is some combination of being purposely obtuse and condescending.
> People are ashamed to be stupid because it’s a shortcoming.
So you are saying that people should be ashamed to be "special-needs" or "retarded"?
"Retarded" is not the same thing as stupid. You are using a similar but distinct concept as a stand-in for stupid. For example, I've heard people who were being socially awkward called "autistic" as a pejorative. They did not have autism, but they were being socially awkward because the two concepts are distinct. You can say it's bad to be stupid or bad to be a dork, but by using "retarded" or "autistic" in this way you're saying this group of people exemplify these traits and are thus "bad" themselves.
The answer to such a question is pretty much always no. (cf. Betteridge's Law)
Being (developmentally) retarded is a way to be stupid. There are lots of other ways, each with their own merits. People with intellectual disabilities aren't inherently bad, or morally bad, or necessarily worse than any person without a diagnosable disorder. That said, the whole idea is that the typical process of cognitive development is slowed or stalled on the way from baby to adult. This has undeniable disadvantages and, all else being equal, nobody should want it.
My feeling is that you must mix empathy with empiricism, and be brutally honest about the practical facts, while obviously not extending that to any unfounded personal judgements.
> People with intellectual disabilities aren't inherently bad, or morally bad, or necessarily worse that any person without a diagnosable disorder ... all else being equal, nobody should want it.
When you call someone r-, you are saying they are bad because they are acting like someone with intellectual disabilities. You are asserting that it's okay, because nobody should want to have an intellectual disability.
> For some reason, school boards are terrible at the use-mention distinction.
School boards are mostly made up of average enough people, who are generally quite bad at this distinction. With school boards it's a bit worse, though, because members can selectively refuse to recognize the distinction and use that as a cudgel against their petty opponents (the lower the stakes, the nastier the fights...).
But I think most people know what those words are, so somehow the idea is shared very well. Most taboos are probably like this: they're actually well known, but talking about them and doing them is discouraged.
I wonder about believing you're wrong about any specific knowledge you have. That's pretty hard. Others can try to communicate it to you but your brain tries to find ways to reject that information.
In addition to fiction, qntm is a sharp, versatile programmer. greenery [1], their unobtrusive and generally tasteful Python library for manipulating regular expressions, is also accompanied by high-quality technical writing on related topics [2] [3] [4] [5].
It now occupies pride of place on the increasingly crowded face-height row of my main bookshelf, in a way that feels remarkably different from my favorite-fiction bookmarks folder.
Same. I read Ra, Fine Structure, and There Is No Antimemetics Division all online. Then, as soon as I realized that published versions exist, I bought all three.
Most online fiction I bump into is in desperate need of an editor. The qntm books are actually pretty close to on par with a professionally edited book. And the concepts are not something that I normally find elsewhere.
Ra is absolutely phenomenal, so good that when I finished it I sent qntm an effusive email thanking him for writing it (to which he sent a kind and thoughtful response), the first time I've done that in a long time. Fine Structure is excellent as well.
A generation raised on New Doctor Who would eat it up, especially as in the SCP stories one of qntm's first antagonists is called "Grey" - a nice physical (and vaguely conceptual) similarity to The Silence.
I only watched the first few episodes of TW and didn't like it.
Then someone suggested that I watch a later season; I think they meant 4, but I remembered 3. I watched 3 and found it okay-ish, but didn't pursue it further.
The first time I read There Is No Antimemetics Division I immediately read it again. Now I reread it whenever I'm waiting for a good book to show up. There's no other book that I can just read again and again and still enjoy.
For some reason, this reminds me of a question that occurred to me: how could we design some sort of error correction/encryption algorithm which makes the information impossible to encrypt?
If we consider error correction to be the capacity for a message to resist errors, and encryption as the design of a reversible error for any possible message (to add the error is to encrypt and to remove it is to decrypt), then how can we make an error correction scheme so good that a message encoded with it can be error-corrected back into the original regardless of how the encoded message is encrypted?
Encryption does not "add" anything, the way error/noise does; it translates. The signal-to-noise ratio stays the same.
Another way to describe it is that error correction creates resistance against (effectively) some upper bound of error/noise. It does that by essentially multiplying the signal, so that (with a still-constant error) it increases the effective SNR, as that is just signal divided by error. It cannot work if you have 100% noise and 0% signal (it's intuitive why: 0 multiplied by x is still 0). A good encryption scheme, however (pretty much any common one that isn't a toy), has the goal of making the signal look entirely random to anyone without the proper algorithm and secret to reverse it. With an effectively zero SNR to that observer, there is no signal to boost.
Noise/error is random; if it weren't, it would be reversible and wouldn't need any error correction techniques (which effectively reduce bandwidth). Encrypting, however, is not random at all: it is entirely deterministic, which is what makes it intentionally reversible.
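To make that concrete, here's a minimal Python sketch with a toy repetition code standing in for real error correction (the numbers are just illustrative). Majority voting recovers the message through 10% random bit flips, but XORing with a full-entropy keystream - the crudest model of encryption - leaves nothing for the vote to amplify:

    import random

    def encode(bits, r=5):
        # Repetition code: repeat each bit r times.
        return [b for b in bits for _ in range(r)]

    def decode(coded, r=5):
        # Majority vote over each group of r copies.
        return [int(sum(coded[i:i+r]) > r // 2) for i in range(0, len(coded), r)]

    msg = [random.randint(0, 1) for _ in range(1000)]
    coded = encode(msg)

    # Random noise at a 10% flip rate: the majority vote recovers ~99% of bits.
    noisy = [b ^ (random.random() < 0.10) for b in coded]
    print(sum(a == b for a, b in zip(decode(noisy), msg)) / len(msg))

    # "Encryption" as XOR with a uniform keystream: every encoded bit is now
    # uniformly random (0% signal), so decoding does no better than chance.
    keystream = [random.randint(0, 1) for _ in coded]
    enc = [b ^ k for b, k in zip(coded, keystream)]
    print(sum(a == b for a, b in zip(decode(enc), msg)) / len(msg))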
> can we make an error correction scheme so good that a message encoded with it can be error corrected back into the original regardless of how the encoded message is encrypted?
No. Preventing this is called IND-CPA (indistinguishability under chosen-plaintext attack) security, and is basically table stakes for any modern symmetric encryption algorithm.
In fact this is even weaker than IND-CPA, since in IND-CPA the attacker can first observe arbitrarily many other plaintexts and use the resulting information to choose two (not-yet-seen) plaintexts specific to the particular encryption algorithm and key to try to distinguish, and they don't have to be sure which is which, only to do significantly better than chance.
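For intuition, here's a toy sketch of the core distinguishing experiment (simplified: no encryption oracle, and a one-time pad in place of a real cipher; the function names are mine, not from any standard API):

    import secrets

    def otp_encrypt(m: bytes) -> bytes:
        # One-time pad with a fresh uniform key: the ciphertext is uniform
        # regardless of the plaintext.
        key = secrets.token_bytes(len(m))
        return bytes(a ^ b for a, b in zip(m, key))

    def indcpa_round(choose, guess) -> bool:
        # The adversary picks two equal-length plaintexts, the challenger
        # encrypts one at random, and the adversary guesses which it was.
        m0, m1 = choose()
        b = secrets.randbelow(2)
        return guess(otp_encrypt([m0, m1][b])) == b

    # Against a uniform ciphertext, no guessing strategy can beat 1/2.
    wins = sum(indcpa_round(lambda: (b"attack!", b"retreat"),
                            lambda c: c[0] & 1)  # an arbitrary strategy
               for _ in range(10_000))
    print(wins / 10_000)  # ~0.5

An algorithm is IND-CPA secure when no efficient adversary wins this game (with the oracle added back) non-negligibly more than half the time.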
You'd need to cleverly narrow down the definition of "encrypted" you're operating under. The broad definition allows me to take your message and essentially turn it into any sequence of bits of sufficient length, and there's no way for you to then layer any further restrictions on top of that sequence because of the near-arbitrary power I have in selecting my encryption scheme. You'd need to reduce that power in some way to then create even a subset of messages that could survive the encryption.
Such a result would probably be of no practical use but it could be interesting in a recreational mathematics sort of way.
> You'd need to cleverly narrow down the definition of "encrypted" you're operating under.
absolutely.
I suppose I should have explained that I'm thinking of "practical key-based symmetric(?) encryption", i.e. encryption which hinges on a key which gets "expanded" by some clever (and arbitrary) algorithm into enough bits for any message, and which (it goes without saying) can be decrypted back exactly given a key and an algorithm (the same ones, or at least very similar ones if symmetric).
I'm not knowledgeable enough in cryptography to easily consider the asymmetric case, but I don't think symmetric vs. asymmetric changes things that much.
In that case you can encrypt your message by treating it as one big number, and producing an encrypted message that many blocks long.
Inefficient in this particular case, but there are length-related attacks, so it's at least not crazily out of the domain of discourse.
But I had to wait for you to qualify it that way, because if someone knows this is your plan, they can compensate with an "encryption" scheme that counters the approach.
This sounds like the irresistible force paradox, in that it is only a paradox if one allows for arbitrary definitions of error correction or encryption.
At least with classical information, I'm fairly sure this is impossible.
Classical information can be described with a sequence of bits. If you have a long enough shared random sequence of bits, you can always use that as a one-time-pad, and encrypt it that way.
For quantum information, I'm fairly sure that a quantum variation of the one-time-pad still works (where the pad consists of entangled pairs of qubits (the two parties each hold one qubit of each pair) instead of just shared random bits), and so it is, I think, also impossible.
(And even if it were possible-but-only-for-quantum-information, I think the no-cloning theorem would still render it pointless: the only thing that preventing encryption would accomplish is allowing someone to successfully intercept a message that was being sent, instead of just causing the message to be lost.)
_____
Maybe the idea you are really looking for isn't "prevent it from being encrypted", but instead, "make any 'simple' reversible transformation of it have parts that hint at a way to decode it / leave it still decodeable by some algorithm"?
where, I guess "simple" means something along the lines of, where the transformation can be described with much less information than the message to be sent?
This, might(?) be possible?
Ok, suppose the transformation is done by a deterministic finite state transducer, and specifically one which is invertible (i.e. one T s.t. there exists a transducer S s.t. their composition gives the identity relationship over strings on the alphabet).
Then, uh,
I guess you could like, take many copies of the message but transformed by different such transducers, and concatenate them together,
except, accounting for the possibility of influence from the previous copies on the current copy.
(accounting for this possibility by considering, given a transducer T and a string s1, constructing a transducer T' s.t. for any s2 and s3, ((s1 s2 [T] s3) iff (s2 [T'] s3)), or... something like that.)
I think if the descriptions of the possible adversary transducers you are dealing with are small compared to the messages you are sending, or rather, if you are allowed to encode your messages in ways that make them really gargantuan and much larger than could ever be practical, then I think this could be done?
edit:
On the other hand, if you don't restrict the adversary to small (relative to your message) reversible finite state transducers, but instead, say, Turing machines which have their complexity (either Kolmogorov complexity or Levin complexity or something) much smaller than that of the message you want to send, and where these compute a transformation with a computable inverse which is provably an inverse,
uh, well, that makes the problem harder, but,
well, I guess if one can enumerate through all such Turing machines (of which there will be finitely many, due to the bound on the complexity), and find the inverse of each, and apply each to the output of the machine,
uh, well, one of these will produce the right output of course, but how can one go about determining which one it is?
If you have an oracle for complexity...
Ok, maybe it would be better to abstract more.
The adversary, Chuck, has a large and fairly general, but finite, set of invertible maps from strings to strings, and they will choose one map h from this set.
Alice and Bob also know this set, but not which map they chose.
Alice and Bob need to agree on a pair of functions f, g from strings to strings, with the goal that the composition f ; h ; g is the identity function.
If Alice and Bob have no restrictions on what functions they can use, then, I think there is a solution.
The set of strings can be put in one to one correspondence with the natural numbers.
What Alice and Bob need to do is find an infinite set of strings such that no two of those strings can be carried to the same output by (possibly different) transformations from Chuck's set.
For any input, there are only finitely many outputs that these transformations could give.
Furthermore, because each of the transformations are invertible, for each of the outputs, each of the transformations produces that output for at most one input, and therefore there are only finitely many inputs such that some map in the set produces that output.
So, for each input, there are only finitely many other inputs with which it could be confused.
So, a sequence of these strings can be constructed as follows:
start with the 0th possible string (under the chosen mapping).
This will be used to encode 0.
Then, repeat the following:
If one has encodings for all natural numbers up to n,
take the first string which cannot be confused with any of the strings one has already chosen to encode a number.
This will be the encoding for n+1.
This works.
(to decode, just find the only codeword which could be transformed into that by one of the maps)
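A quick sketch of that construction in Python, identifying strings with natural numbers as above (Chuck's set of invertible maps here is a toy stand-in I made up for illustration):

    # Chuck's finite set of invertible maps on naturals (toy examples).
    MAPS = [lambda n: n ^ 0b1011,   # XOR with a constant (self-inverse)
            lambda n: n + 7,        # shift upward (inverse: subtract 7)
            lambda n: n]            # identity

    def images(n):
        # Every output Chuck could produce from codeword n.
        return {h(n) for h in MAPS}

    def build_codebook(count):
        # Greedily pick the smallest numbers whose possible images don't
        # collide with any image of an already-chosen codeword.
        codebook, used = [], set()
        n = 0
        while len(codebook) < count:
            im = images(n)
            if im.isdisjoint(used):
                codebook.append(n)   # n encodes the number len(codebook)-1
                used |= im
            n += 1
        return codebook

    def decode(output, codebook):
        # The unique codeword that some map could have sent to this output.
        matches = [c for c in codebook if output in images(c)]
        assert len(matches) == 1
        return matches[0]

    book = build_codebook(8)              # encodings for 0..7
    c = book[5]                           # the codeword encoding 5
    print(decode(MAPS[0](c), book) == c)  # True, whichever map Chuck applies

The codebook is extremely sparse, which matches the intuition upthread that the encoded messages have to be gargantuan relative to the plaintext.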
Perhaps this could be extended to allow Chuck a potentially infinite set of maps, under the restriction that there is an order on these maps and he is only allowed to use the ones before a certain point in this order, depending on the size of the input he is sent? Or, alternatively, some limitation on how quickly the maps can increase the size (or complexity?) of the string?
>Maybe the idea you are really looking for isn't "prevent it from being encrypted", but instead, "make any 'simple' reversible transformation of it have parts that hint at a way to decode it / leave it still decodeable by some algorithm"?
yes, but in such a way that the original message is recoverable even if it gets encrypted.
I should have said, that I'm thinking classically and that I'm thinking about key-based encryption, i.e. no one-time pads and the key has to be smaller than the message.
> if you are allowed to encode your messages in ways that make them really gargantuan and much larger than could ever be practical, then I think this could be done?
Yea, I think this may be unavoidable in any way this is done. The encoded message will be much bigger than the plain text.
Which also makes it reasonable to expect this to work only against key-based cyphers.
I don't understand why you compare the complexity of the message and the complexity of the machines doing the cyphering (either en- or de-cryption), but regardless, thanks for the response.
The key issue here is that it really has to be much bigger; merely a hundred times or a million times bigger is definitely not going to be sufficient.
There indeed is a limitation of key-based algorithms that encrypting too many blocks with a certain structure may make them vulnerable; however, that limit is quite large unless you're using older/smaller algorithms like DES. I couldn't remember or quickly find what the circumstances are for e.g. AES-128, but IIRC it might be something like 2^48 blocks. So for your approach to work, you'd have to stretch each block of content (e.g. 16 bytes) into petabytes of encoded data before you start having even a theoretical chance to decode it; and for AES-256 it would probably be that amount squared, and you simply can't encode or store 2^96 blocks of anything.
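A quick sanity check of those magnitudes (taking the half-remembered 2^48 figure at face value; it's an assumption here, not a verified AES-128 bound):

    # If a codeword must span ~2^48 16-byte blocks before structure could
    # conceivably leak, each plaintext block balloons to:
    blocks = 2 ** 48
    print(blocks * 16 / 2 ** 50)             # 4.0 -> 4 PiB per 16-byte block

    # The AES-256 analogue of 2^96 blocks would be 2^100 bytes:
    print((2 ** 96 * 16).bit_length() - 1)   # 100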
[0]: https://en.wikipedia.org/wiki/SCP_Foundation