
Interstellar communication. IX. Message decontamination is impossible - mkhalil
https://arxiv.org/abs/1802.02180
======
obblekk
I read this paper (it's only a few pages, completely non-technical).

The entire paper assumes we receive a certain type of communication and, in
the course of decoding it, destroy humanity. It would make an interesting
screenplay, but the paper presents no proof whatsoever that "message
decontamination is impossible."

At best, the paper presents 1 case and outlines 1 possible ending which is the
end of humanity. Kinda interesting but not nearly as universal as the title
claims.

~~~
k__
Wasn't Arrival about this?

~~~
spartanatreyu
Nope.

<spoilers>It's about two things. First, an alien race that gives humanity a
"weapon", which is their language. Anyone who learns the language starts to
perceive time in a non-linear (probably deterministic) fashion, i.e.
flashbacks and flashforwards. They did this so that humanity would save them
sometime in the future from something unknown to modern-day humans. And
second, it's about how a translator lives through the life and death of her
child while understanding said language. She makes the choice to conceive her
child even though she knew the child was going to die in the future, because
it was worth it to her for how much she loved her daughter.</spoilers>

~~~
e40
That is a very good summary of the movie. I've watched it 3 times and love it
more each time.

~~~
Simon_says
Did you read the short story? IMO, it's better.

~~~
colatkinson
Link to the story, for GP:
[http://discours.philol.msu.ru/attachments/article/264/Chiang...](http://discours.philol.msu.ru/attachments/article/264/Chiang_Story%20of%20Your%20Life.pdf)

------
alex_young
Paper says its focus is on all ET communications, but it actually concentrates
on malicious AI code.

Secure isolation review of code is dismissed because AIs can trick people into
freeing them.

Finding states: "[M]essage cannot be decontaminated with certainty, and
technical risks remain which can pose an existential threat. Complex messages
would need to be destroyed in the risk averse case."

Couldn't we simply agree not to execute un-audited code from unknown third
parties? Seems like the same threat vector as unknown executable code
delivered over email to me.

~~~
lkrubner
I have something urgent I need to tell you. Your life could be in danger. You
need to contact me immediately.

~~~
pavel_lishin
"They're following this signal at 0.98c. You might be able to defeat them, but
you must act immediately or face utter annihilation."

~~~
IntronExon
If they’re following at that velocity, then we’re already dead, and “they” are
a relativistic bombardment we won’t even see coming.

~~~
pavel_lishin
Then the promise of safety is all the more tempting, isn't it?

~~~
IntronExon
Maybe, but if they’re sending us EM signals then we know we’re doomed. Physics
as we know it, with c as a limit, strongly implies that it is simply
impossible to defend against a relativistic strike.

------
SCHiM
I've been fascinated by these AI containment/usage scenarios ever since I
heard of them. Here's another perspective on the problem that I find very
fascinating:

Assume that, beyond a shadow of a doubt, we have an impenetrable system for
interfacing with the AI. I say system, because the people that are part of the
protocol are all moral equivalents of Jesus. They can't be corrupted, they
can't be blackmailed. The AI is kept in a box that cannot be accessed by
anyone else. In short: we've solved the containment problem (of the AI).

Of course we _do_ want to use the AI. We've installed it in the box for a
reason. If we didn't intend to use it we might as well not have built it. So
actually using the tech that it gives us, after checking for corrupting
technology, is also a given in this scenario.

We can ask the AI questions and interact with it, extract knowledge and have
it solve our problems for us.

If we were to use the AI, it would still destroy us, even without corrupting
any of the people working with it or feeding us corrupted technology. Simply
_using_ the AI will be enough.

Using the AI effectively cheapens the creation of another AI. Each new
processor we have the AI design will be faster than its predecessors. Every
mathematical problem that it solves is checked, confirmed, and shared in our
universities. Eventually the technological over-saturation of our society
will ensure that the contemporary equivalent of a mobile phone can run a
strong AI. The ubiquity of the new insights will ensure that all the theory
needed to bootstrap another AI is there for the taking. In short, truly
_using_ the AI (asking the questions, sharing the knowledge) will ensure
another AI existing outside of the containment system, malicious or
otherwise. Taken to the extreme: if strong recursive AI is possible, it is a
given.

I don't think strong self improving AI can actually exist. But who knows?

~~~
amptorn
> Using the AI effectively cheapens the creation of another AI. Each new
> processor we have the AI design will be faster than its predecessors.
> [...] Eventually the technological over-saturation of our society will
> ensure that the contemporary equivalent of a mobile phone can run a strong
> AI.

These are extremely strong statements, which you cannot prove.

~~~
SCHiM
Well, we do have strong AI in a form factor the size of an average human's
head. And that AI, while not superhuman in most cases, is not specially tuned
to be intelligent for the sake of intelligence; it's only as intelligent as
evolution 'needed' it to be to survive. I can't prove it, but I'm convinced
that it is possible. You don't have to be convinced, but I hope you see why I
state my case so confidently.

------
scandox
Decoding the messages will probably mine ET-bitcoin for some galactic-scale
speculators.

------
EtDybNuvCu
The report cites Bostrom~

But seriously, confined evaluation was figured out in the mid-90s by the E
folks, building on years of research into capability security, and the worst
thing that can happen is Turing-completeness.

I'd be more worried that the message contains a memetic hazard. Our computers
wouldn't be infected; our minds would be infected.

~~~
ars
...if such a thing actually exists.

------
drefanzor
I'm reading Infinity Born by Douglas E. Richards; it touches on many of the
AGI (artificial general intelligence) concepts talked about here. The guy in
the book creates an AGI and thinks he has it contained, but it evolves to the
point where it gets out of its box. Good stuff, highly recommend.

------
msingle
Makes Stanislaw Lem seem even more predictive than usual
([https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)](https://en.wikipedia.org/wiki/His_Master%27s_Voice_\(novel\)))

------
thesz
I _MUST_ refer to "Белая трость калибра 7,65" ("White cane of 7.65 caliber"):
[http://lib.ru/SOCFANT/NEFF/trost.txt](http://lib.ru/SOCFANT/NEFF/trost.txt)
(Russian translation of Czech SciFi piece)

I read it in the USSR magazine "Technology for the Youth" (Техника Молодёжи)
a long time ago, around 1980. It is short and profound; for example, when I
later came across the phenomenon of human echolocation, I instantly
remembered that piece. Our current human echolocators are about as good as
the main hero of the piece above.

Given that, I think we are safe. ;)

------
teolandon
Paper is just a wrapper for the AI box problem, nothing new. It doesn't even
provide a very strong argument for the impossibility of decontamination, just
a "proof" by example (not really a proof).

~~~
ryan-allen
Here's a link to the AI box problem on Wikipedia for anyone like me who hadn't
heard of it:
[https://en.wikipedia.org/wiki/AI_box](https://en.wikipedia.org/wiki/AI_box)

Also, this reminds me of the premise of the movie Ex Machina (no spoilers!)
which is sweet!
[http://www.imdb.com/title/tt0470752/](http://www.imdb.com/title/tt0470752/)

------
walrus01
Here's an interesting thought experiment. If you were going to transmit
gigabytes of data to another star system with the intention that a
technologically sophisticated society on the other end could decode and
reassemble it, what parity and checksumming scheme would you use? For example,
treat it like a very poor NNTP service and include 60% PAR2 files?

If you were going to use some form of lossless compression, what compressor
would you choose that could be simply and quickly explained in a preamble
series of data?
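A toy sketch of the first half of that question: fixed-size blocks framed
with an index and a CRC32 checksum, naively repeated for redundancy. (Real
schemes like PAR2 use Reed–Solomon erasure codes instead, which can recover
blocks that are lost in *every* copy; all names below are my own invention.)

```python
import zlib

BLOCK = 32   # payload bytes per block
REPEAT = 3   # naive redundancy: transmit each block three times

def encode(data: bytes) -> list[bytes]:
    """Frame data as [index | payload | CRC32] blocks, each sent REPEAT times."""
    blocks = []
    for n, i in enumerate(range(0, len(data), BLOCK)):
        frame = n.to_bytes(4, "big") + data[i:i + BLOCK]
        frame += zlib.crc32(frame).to_bytes(4, "big")
        blocks.extend([frame] * REPEAT)
    return blocks

def decode(blocks: list[bytes]) -> bytes:
    """Reassemble from any one checksum-valid copy of each block."""
    good = {}
    for b in blocks:
        body, crc = b[:-4], int.from_bytes(b[-4:], "big")
        if zlib.crc32(body) != crc:
            continue  # corrupted copy, drop it
        n = int.from_bytes(body[:4], "big")
        good.setdefault(n, body[4:])  # keep the first valid copy
    return b"".join(good[n] for n in sorted(good))
```

With this framing, the receiver survives any corruption that spares at least
one copy of each block, at the cost of a 3x bandwidth overhead.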

------
zdw
Reminds me of the episode of ST:TNG where they come upon a Borg that was
injured, and debate whether to show him a puzzle virus that would cause the
entire Borg collective to grind to a halt by decoding it:

[http://memory-alpha.wikia.com/wiki/I_Borg_(episode)](http://memory-alpha.wikia.com/wiki/I_Borg_\(episode\))

------
dsr_
For a glimpse at a future humanity driven to (understandable) paranoia by this
scenario, see Ken MacLeod's _The Cassini Division_.

------
MattRix
I just finished reading The Three-Body Problem, which covers some of the same
ground... though the aliens in that book end up communicating in some more
exotic ways.

[https://en.wikipedia.org/wiki/The_Three-Body_Problem_(novel)](https://en.wikipedia.org/wiki/The_Three-Body_Problem_\(novel\))

~~~
weerd
Maybe it's been too long since I read this, but I feel it was never
adequately explained how the aliens understood the initial message. All of a
sudden, communication was taking place...

------
ggm
Richard Rhodes' book on the physics and history of science behind the bomb
goes to some length to explain just how much information about fission and
fusion is a matter of public record. Yet small details remain redacted:
mostly, if I recall correctly, about 'levitated cores' in fission bombs and
the exact composition of the 'lens' in a fissile igniter for a fusion bomb.

Another source, I believe, comments that with one infinitesimal change to the
fundamental universal constants, fission and fusion would be impossible
except at stellar scale. Odd, that we live in a universe where the boundary
between existence and annihilation is so thin, yet we continue to exist.

Personally, I think the proof here is a divide-by-zero instance: conceptually,
a design of information that can be communicated such that it cannot be
comprehended or enacted, but its consequence ensues anyway. Isn't that a kind
of meme?

~~~
crooked-v
> Odd, that we live in a universe where the boundary between existence and
> annihilation is so thin, yet we continue to exist.

If the universal constants were different, there could well be a different
form of life making the comment instead.

For example, see Greg Egan's scifi book The Clockwork Rocket, which features a
universe that runs on fundamentally different principles but that has broadly
recognizable forms of life.

------
21
Let's go one step further.

Even receiving and recording the message might be very dangerous. There have
been cases in the past where contaminated MP3 or video files took over the
media players (because of decoding bugs in them).

Imagine a very weak signal which requires complicated signal processing and
correlation between multiple receivers to reconstruct.

~~~
shkkmo
> There have been cases in the past where contaminated MP3 or video files took
> over the media players (because of decoding bugs in them).

The paper does talk about message compression being a vulnerability, since we
would have a hard time manually running their decompression algorithm.

However, I find it hard to believe that any kind of message will be dangerous
due to signal processing without the sender having knowledge of our IT system
structure and implementation details, or some sort of feedback cycle. Given
the time delays of interstellar messaging, I feel the only risk here can come
from some sort of AI running locally, which I assume would only be possible
if we evaluate the message in a Turing-complete tool (such as their provided
decompression algorithm).
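To make the distinction concrete (a toy example of my own, not from the
paper): a fixed decoder such as run-length decoding is a total function with
a checkable output bound, so it provably halts, whereas executing an
arbitrary decompressor program shipped inside the message carries no such
guarantee.

```python
def rle_decode(pairs: list[tuple[int, str]], max_out: int = 10_000) -> str:
    """Run-length decoding: loops only over its input, so it always
    terminates, and an explicit cap bounds the output size up front.
    A decompressor *program* embedded in the message would carry no
    such guarantee -- that is where Turing-completeness creeps in."""
    out, total = [], 0
    for count, char in pairs:
        if count < 0:
            raise ValueError("invalid run length")
        total += count
        if total > max_out:
            raise ValueError("message exceeds agreed bounds")
        out.append(char * count)
    return "".join(out)
```

Here `rle_decode([(3, "a"), (2, "b")])` yields `"aaabb"`, and any input
demanding more than `max_out` characters is rejected before work is done.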

~~~
21
I get your point; I also thought of this.

But at the same time, with long enough messages you could sample a lot of
possible IT system structures.

The unknown danger here is that there might be some underlying generic
exploitable structure in our systems or processes that we are not aware of
yet.

~~~
shkkmo
> The unknown danger here is that there might be some underlying generic
> exploitable structure in our systems or processes that we are not aware of
> yet.

You would need more than an exploit; you'd need an exploit that allows you to
parse the message in a Turing-complete fashion in order to bootstrap an AI.

Otherwise, I don't see how identifying an exploit translates into a
significant risk to humanity.

~~~
21
They can guide our behavior somehow.

For example, they can safely assume that the moment we catch a glimpse of a
message, we will point all our receivers towards the source; thus they get us
to amplify their signal and look for it all over the spectrum.

Then, they can guide what sort of analyses we run.

For example, they might embed a lot of prime number structures into their
signal, thus we will apply a lot of our prime number expertise and methods on
their numbers. What if you can bootstrap a turing complete machine by doing
various statistics on prime numbers?

They could alternate: one week the signal is prime-number heavy, the next
week it has very weird spectral distributions, so we will pull out all kinds
of Fourier transforms.

We don't yet know all the possible places where a turing machine might hide.

In a way, you can consider the whole network of research facilities and
researchers applying statistical methods to a signal a sort of predictable,
controllable machine. Maybe not a Turing machine, but perhaps a weaker kind
will do; think of how you can do useful computations with non-deterministic
machines. Couple that with psychology (social engineering) and game theory.

It seems highly dangerous to me to let our signal analysis be guided by the
signal itself, and in general to do anything with a signal from an
intelligent source.

Even if it's just clear-text ASCII English, who knows what kind of social
havoc you could wreak with just a few carefully constructed sentences from a
very advanced intelligence.

~~~
mrcogmor
That isn't how computers or mathematics work. If you calculate an average or
perform other simple statistical methods, it is impossible for the results to
be Turing complete. If they used some hypothetical recursive Turing-complete
method of analysing the numbers, then the mathematicians would be aware that
it was Turing complete. Even if such an analysis were performed, the alien
program it interpreted would only be able to affect the results of the
analysis, not the computer as a whole, because a mathematical analysis
doesn't require network access, disk access, or the ability to execute
machine code.

------
nycdotnet
E.T. says I just have to enable macros in this interstellar doc to get some
Reese's Pieces!

------
gene-h
It is certainly quite uncommon to see "Clarke and Kubrick" cited in a paper.

------
sllabres
If it's not possible, it is probably the reason for the Great Filter [1] :-)

[1]
[https://en.wikipedia.org/wiki/Great_Filter](https://en.wikipedia.org/wiki/Great_Filter)

------
danidiaz
Looks like the plot of "A for Andromeda"
[https://en.wikipedia.org/wiki/A_for_Andromeda](https://en.wikipedia.org/wiki/A_for_Andromeda)

~~~
utopkara
Thank you for pointing this out. I had heard about this series but never
watched it, and didn't know the plot. Amazing how sci-fi writers leapfrog
their times in their understanding of science and its implications.

------
qubex
What a load of speculative, thin twaddle. This is just an extension of the
never-ending “hard takeoff” malicious-AI argument (as exemplified by _Our
Final Invention_ by James Barrat, among others). “It might cause a mob” or
“it might contain a compressed AI that ingratiates itself and betrays us” is
really not a terribly convincing argument. It's just... paranoid and meek.

------
rumcajz
It's worth reading Stanislaw Lem's His Master's Voice. It deals exactly with
this topic but it is much more nuanced.

------
ars
They talk as if they have some sort of scientific or mathematical proof of
this.

But all they have is "it seems to me that".

You can publish that, sure, think tanks do that kind of thing all the time.
But that's not a scientific paper, not as I understand the concept.

------
dilyevsky
Pretty sure the first message to be received will be something along the
lines of "hot dates in your local cloud", or perhaps a plea to send some
credits to unlock the hidden bank account of Andromeda's king.

------
utopkara
Fantastic thought experiment. Not sure if it was already covered in a scifi
story before, my scifi literature is pretty weak. If not, somebody should turn
this into one.

------
gmueckl
I cannot decide if this paper is meant to be a joke.

Also, there is no proof that true AI can be achieved with Turing completeness
alone, so this is a gaping hole in the logic right there.

~~~
sbierwagen
Do you believe the human brain is not a computational system?

~~~
gmueckl
The human brain does not follow a computer architecture that we can replicate
in a lab.

~~~
trixie_
Couldn't a simulation of all the atoms be done with a computer that was
theoretically powerful enough?

~~~
gmueckl
Well, there are two catches involved. First, we don't have that kind of model
for a human brain, but it would be required to set up proper initial
conditions for a working brain. Second, depending on the simulation method
required (classical vs. QM), the resulting model could easily end up so
ridiculously huge that we couldn't build a computer capable of running it
with current tech, for lack of natural resources.

