
How should we talk to AIs? - champillini
http://blog.stephenwolfram.com/2015/11/how-should-we-talk-to-ais/
======
hacker_9
Interesting that the Wolfram Language is very much a LISP dialect, with the
syntax swizzled here and there to be a bit more human-readable.

Also his Riffle example is thinking about the problem the wrong way round. It
looks like he tried to turn code into English, when it should be the other way
around. Here is an actual English version:

    
    
      1. Start with a purple square
      2. Clone the square, rotate it by 0.1 radians and scale it to fit the previous square
      3. Also alternate the colour between purple and yellow
      4. Repeat the clone process until the square is just a dot in the middle of the screen
    

Note how easy it was to understand what I meant, and yet a computer would fail
at so many of the instructions: scale how? how to test if it fits? alternate
whose colour? repeat what steps? 'clone process'? clone same square or newest
created square? what defines a dot? screen? middle?

All these ambiguities we can solve, but a computer cannot, which is why our
programming languages are so specific, to the point of cognitive overload. Also,
none of his later examples show me that Wolfram is better than English,
unless of course you use Wolfram daily (as he, 'the creator', likely does/did),
in which case your brain is already optimized to read it...
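For contrast, here is one possible fully disambiguated reading of those four steps, as a rough Python sketch (pure geometry, no drawing; the scale factor, the fit test, and what counts as a 'dot' are all assumptions the English left open, pinned down explicitly here):

```python
import math

# One possible disambiguation of the four English steps above.
# Assumed resolutions: "fit" means the new square is inscribed in the
# previous one after a 0.1 rad rotation, and a "dot" means a side length
# below 1% of the original.

THETA = 0.1   # rotation per step, in radians
DOT = 0.01    # side length at which a square counts as a "dot"

def inscribed_scale(theta):
    """Scale factor so a square rotated by theta fits inside its parent."""
    return 1.0 / (math.cos(theta) + math.sin(theta))

def riffle_squares(side=1.0):
    """Steps 1-4: clone, rotate, scale, alternate colour, until a dot."""
    squares, angle, colour = [], 0.0, "purple"
    while side >= DOT:
        squares.append((side, angle, colour))
        side *= inscribed_scale(THETA)   # step 2: scale to fit
        angle += THETA                   # step 2: rotate
        colour = "yellow" if colour == "purple" else "purple"  # step 3
    return squares

squares = riffle_squares()
print(len(squares), squares[0], squares[-1])
```

Note that a square rotated by theta fits inside its parent exactly when its side shrinks by 1/(cos theta + sin theta); that single geometric fact, which the English version never states, is what makes the loop terminate.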

This isn't even mentioning that language is only part of how we communicate.
We can often leave so much out because our visual system, body movements,
previous experiences, memory and other senses fill in the rest.

~~~
joesb
> All these ambiguities we can solve

We can solve them by specifying what each word means. That's no different from
programming.

Also, your step 4 has a bug, because "repeating the clone process" does not
include the "rotate and scale it to fit" part. It'll just keep cloning the
square all day and never get the square down to the size of a dot.

Also, if your screen is not square and you did not start in the center of the
screen, then the square can be as small as a dot, but it won't be "in the
middle of the screen".

~~~
hacker_9
Joesb, if anything you are proving my point! As another human you saw large
ambiguities in my instructions but could still figure out 1. the problems (you
knew no one would clone the square infinitely, as that wouldn't 'make sense')
and 2. the required solutions (I probably meant scale+rotate as well to be
included in the cloning process, otherwise no interesting visual change would
happen).

Tell me what programming language / compiler can do that just by specifying
word meanings? Context, sentence structure, your understanding, etc. all play
into it as well.

~~~
eridal
Are ambiguities only meaningful to humans?

It seems to me that computers are instructed either to (1) refuse to work in
the face of ambiguities, or (2) resolve them by choosing one of the options.

Kind of like when `yacc` tells you that some rule is ambiguous, but then it
ends up resolving it by deterministically always picking the same option.
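As a toy illustration (hypothetical code, not from the thread), that deterministic disambiguation can be sketched in Python: an ambiguous grammar rule like `expr: expr '-' expr` gives "8 - 4 - 2" two legal readings, and a generated parser simply commits to one of them every time rather than asking which was meant:

```python
# Two legal parses of the same ambiguous token stream. A parser
# generator does not refuse or ask; like yacc's default shift/reduce
# resolution, it deterministically commits to one reading.

def parse_left(tokens):
    """Group to the left, ((8 - 4) - 2): the usual yacc-style default."""
    result = tokens[0]
    for i in range(1, len(tokens), 2):
        result -= tokens[i + 1]
    return result

def parse_right(tokens):
    """The other legal reading of the same tokens: (8 - (4 - 2))."""
    return tokens[0] if len(tokens) == 1 else tokens[0] - parse_right(tokens[2:])

tokens = [8, "-", 4, "-", 2]
print(parse_left(tokens), parse_right(tokens))  # prints "2 6": one input, two meanings
```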

~~~
hacker_9
We don't know how to make computers understand ambiguities; that is the
problem. Yacc always selecting the first option isn't necessarily right, nor
is it how we would solve the problem; we choose the option that makes the most
'sense'.

Of course, what is 'common sense'? Something that only comes about when you
have a billion neurons doing your calculations, perhaps.

Also, if computers had human-level understanding we wouldn't need Yacc rules at
all; we can recognize code from hundreds of different languages just by looking
at it, even partial code, despite all the ambiguity that putting every
programming language into one Yacc grammar would create.

------
dkural
Wolfram is the ultimate example of a company failing at product thinking. For
decades they have yelled "You can do anything with this!", but failed to
identify, specifically, one thing it does better than other products designed
specifically to do that thing. Thus, for anything it could do, other products
spend more attention and do that specific thing better/cheaper, with a tighter
UX and more content.

It turns out most people don't care if their car is also a surface-to-air
missile, or if their dishwasher can also compose poems.

~~~
veidr
But you can only really say they are failing if you are looking at them
through a short-term lens. Interviews with Stephen Wolfram give the impression
that he is thinking long-term (as in, his entire lifetime and beyond).

Mathematica does actually seem to have dominated its niche for decades. People
did and do all manner of things with it.

Wolfram Alpha, their "computational knowledge engine", has only been live for
6.5 years. It's sort of baffling who it is for (though I do, very occasionally,
find it extremely useful), but it seems what they are trying to do is bigger
than a normal startup that helps you find a taxi or rent out your extra
bedroom or search the news.

They may well fail, but I don't think you can say that they have failed yet.

~~~
dkural
This is what Wolfram says to their engineers who question it. Wolfram /
Mathematica has been around for a while. What's the long-term idea? What
exactly will it do better than some other tool in that category? You can't
get there if you don't know where you're going. It's worse than failing -
they don't know what they are supposed to be failing at.

------
eykanal
So, the best way to talk to a computer is... a programming language?

Let's ignore the fact that, for some reason, he thinks his particular
programming language is better than all the others for interacting with an AI.
Am I missing something here? This seems kinda "duh" to me.

~~~
0xdeadbeefbabe
> Maybe this is a case of having a hammer and then seeing everything as a
> nail. But I’m pretty sure there’s more to it. And at the very least,
> thinking through the issue is a way to understand more about AIs and their
> relation to humans.

Looks like he's pretty sure there is more to it.

------
kruhft
I've always wondered where languages like Lojban[1] would fit into
conversational AI. Since they are designed to reduce ambiguity, I would think
they would be a major stepping stone to bridging the gap between conversation
and the logical correctness needed by a truly intelligent system.

[1]
[https://en.wikipedia.org/wiki/Lojban](https://en.wikipedia.org/wiki/Lojban)

~~~
sidcypher
I'd say a truly intelligent system doesn't need Lojban; it can learn about the
world and its languages from observation and feedback, like children do. A
pseudointelligent system, however, would probably be much more useful if you
could talk to it in Lojban, with fewer communication errors and all.

------
winter45
Concerns about consciousness, intelligence, and Stephen Wolfram's use of the
personal pronoun aside, I think it is reasonable to presume that communication
with early-stage AI systems will desirably include the development of a
vocabulary and syntax structured for 'precision'.

It does seem reasonable that if AIs evolve they will, in their evolution,
develop constructs of 'reality' that Wolfram calls "Post-Linguistic Emergent
Concepts"(1) and that, if they do, until human languages develop deeply
"precise" 'words' for each of these PLECs, Spock will have to translate for
Kirk; Spock, talking to an AI, will be (mostly?) unintelligible to Kirk.

This raises the question: can one reasonably believe that humans are capable
of more than 'bare bones' precision in words? Can one be both Kirk and Spock?

On that note, it is encouraging that Wolfram appears to think our systems may
be (become?) a driving force in our evolution; we may evolve to a point where
the use of a precise or imprecise language is a moment-to-moment choice.

It may be that these precise and imprecise languages will develop into a
single language.

Before that happens, if you were such a dual-linguist, which language would
you prefer? Under what conditions would you switch? What would communication
be like in a world where the commonly spoken language was a meld of precision
and imprecision? Which of the two would have the most influence on the other?
Past a certain stage are there generally distinctions in character?

(1) see:
[https://www.youtube.com/watch?v=TMviBl46dXg](https://www.youtube.com/watch?v=TMviBl46dXg)

~~~
laotzu
This idea of Post-Linguistic Emergent Concepts is an interesting one, which
Marshall McLuhan alluded to in his 1962 book The Gutenberg Galaxy: The Making
of Typographic Man. He suggested that the sheer speed of electric processing
made the alphabet obsolete and that, in order to cope with this drastic
speed-up in communication, we would have to deprecate our slow spoken languages
and adopt an entirely new method of communication:

>Now, in the electric age, the very instantaneous nature of co-existence among
our technological instruments has created a crisis quite new in human history.
Our extended faculties and senses now constitute a single field of experience
which demands that they become collectively conscious. Our technologies, like
our private senses, now demand an interplay and ratio that makes rational co-
existence possible. As long as our technologies were as slow as the wheel or
the alphabet or money, the fact that they were separate, closed systems was
socially and psychically supportable. This is not true now when sight and
sound and movement are simultaneous and global in extent. A ratio of interplay
among these extensions of our human functions is now as necessary collectively
as it has always been for our private and personal rationality in terms of our
private senses or "wits," as they were once called.

------
lovboat
If AIs are not able to communicate with us properly then they are just
programs, not real AIs. I didn't find any interesting idea here, and the sheer
act of proposing that program as a language for AI is comical, to say the
least. People are using R and Python to build a solid foundation for machine
learning; NLP and AI will evolve to communicate with us by their own means.
That's real AI.

------
zamalek
He goes on ad nauseam about how fitting the language is. He's right.

Back in university I'd frequently help seniors out with their projects. One
project involved a battleship AI in mathlab. In maybe 3 hours I got it as good
as it was probably going to get when pitted against a truly random opponent.
However, PRNGs aren't truly random. It was so quick and so effortless to grab
stats on the biases the PRNG had (in order to optimize the guess-selection
process). It was the first time I had ever used the language, and I can
honestly admit that had very little to do with my capability and much more to
do with how damn fluent and intuitive that language is.
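The bias-grabbing step can be sketched in Python (a hypothetical reconstruction, not the original code): the low bits of a power-of-two-modulus linear congruential generator are notoriously structured, and a simple frequency count exposes that immediately:

```python
from collections import Counter

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """A classic linear congruential generator (C's old rand() constants)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
low_bits = [next(gen) & 1 for _ in range(1000)]

# With an odd multiplier and odd increment, the lowest bit of this LCG
# strictly alternates 0,1,0,1,... so an opponent keying its guesses off
# it is trivially predictable once you count the pattern.
print(Counter(low_bits))
print(low_bits[:8])  # prints "[1, 0, 1, 0, 1, 0, 1, 0]"
```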

It's inconsistent, messy and evolutionary but, wow, what an absolute
masterpiece.

~~~
carlob
matlab or mathematica?

~~~
rm445
According to Wikipedia, Mathlab was a computer algebra system (written in
Lisp) from the late 60s, whose author went on to write Macsyma, which inspired
Mathematica.

Like you, I'm still not sure whether the parent poster meant that, or is
confused about Matlab vs Mathematica.

------
bobm_kite9
What's the difference between Wolfram Language and just any other programming
language?

Also, one of the advantages of natural language is its imprecision: it makes
communication more robust and leaves room for interpretation. In much the
same way as imprecise duplication of the genome allows for evolution,
imprecision in language allows for alternative interpretations of the results.

Plus, if you want to, you can check interpretation and understanding just by
asking some questions and creating feedback.

Actually, I just don't get how this is advancing the art in either direction.

~~~
cormullion
Stephen Wolfram has designed (or overseen) the Mathematica language and all
its many APIs for over 25 years, so its consistency is possibly better than
that of other languages. He probably sees the language as being close to human
thought -- it's certainly close to his. Judging from my struggles, I'm not
sure I'd agree...

------
ddingus
If it's an AI, then I should be able to talk to it like I would any other
intelligence.

That is how we talk to an AI. This article is framing a set of powerful
computational entities as an "AI", and that is not yet defensible.

So much of what we communicate among humans depends on implied, common
context. We know what would make sense to other people, or have a good idea of
that, based on our understanding of ourselves and other people.

When I think "AI", I think about something that knows me as more than a set of
attributes and rules, and that I can know in the same way. Imagine a
little-kid type of intelligence connected to those powerful compute entities,
and with a great memory. Better than our memory. And it's fast.

It's that "little kid" type intelligence that is missing! We have the fast, we
have the great memory, we have lots of powerful compute entities too.

What we actually don't have is the "intelligence" to complete the idea of
"artificial intelligence".

------
caligastia
Wolfram is going off into the weeds. The whole point of NKS is that the
universe is digital. We live in a computational universe, therefore we are
living and breathing and interacting with natural (not artificial)
intelligence as a function of merely existing.

Assuming that by AI, Wolfram is referring to a computational device built by
humans, the communication has already happened in the act of designing and
constructing the machine. Real communication is over once you flip it on.

What Wolfram is proposing is to then run a Wolfram Language interpreter on top
of this new machine and use that to communicate, presumably to ask it for the
meaning of the universe. Instead of creating a lingua franca between man and
machine, this will prove only that the machine can emulate a von Neumann
computer and run the Wolfram interpreter on top. You are NOT communicating
with the machine at that point, you are communicating with a parser and a
nifty collection of mathematical routines.

Think of a parrot. You teach it to say "Polly want a cracker!" - and the
parrot may even learn a few human words - but are you really communicating?
Can the parrot express to you the thrill of flying, can it explain to you what
a total mind-fuck it is for it to be stuck in a cage begging for crackers when
it should be flying with its flock and living its life? No, the parrot can
only ask you for a cracker, just the way you taught it.

Man will never have a relationship among equals with a machine, regardless of
how much software he pours on top - the machine has a frame of reference that
can never be communicated to a human, it's different at the electronic level,
never mind the symbolic or linguistic. The machine will have a reality so
different from that of the human that even if it could communicate something
of substance, it would be as futile as asking your parrot to help you with a
math problem - the parrot is not stupid, and in fact can do many things you
would find highly mathematical, but it cannot help you with your homework
because you are in two different mental universes, no human-invented language
can bridge them.

------
CM30
It's a good point, but if we have to basically 'program' them to get them to
understand anything, they're not particularly good/useful AIs. The vast
majority of the population won't do this, and any decent AI needs to be able
to understand them.

------
joeevans1000
"And by now people routinely ask personal assistant systems—many powered by
Wolfram|Alpha—zillions of questions in ordinary language every day."

Is this true? I didn't realize this.

------
skybrian
It might be interesting to see what language evolves when you train two or
more AI's to communicate over a connection to complete some task.

------
oneJob
I misread as "How we should talk to AIs."

Thought this was the AI (aka Stephen Wolfram) telling humans how it'd prefer
we interact with it.

------
bashedly
I'm hoping someone closer to the WolframAlpha team and work can shed some
light on this - but is the extensive use of the first person warranted here?
There's an awful lot of "I" and "I've".

Is the work being done really so directly attributable to Stephen Wolfram, or
is there an army of hard-working individuals behind the scenes not being
referenced here? I'm not suggesting they list everyone by name or anything,
but a simple shift to something like "our team" would seem more generous. Of
course, this is all moot if he is indeed primarily or almost solely
responsible for the progress being referenced.

~~~
gohrt
Wolfram employees sign contracts giving Dr Wolfram credit for their work, in
exchange for their pay.

~~~
bashedly
I know this is overly idealistic, but it seems to me that paying for "credit"
(especially in terms of research) seems "wrong". I understand paying for
ownership of the results of someone's work, or paying to state that the
broader organization takes credit for the body of work produced by the
employees. But for one individual to pay to personally receive credit for the
output of another seems blatantly dishonest.

~~~
prahladyeri
Copyright law deals with the situation you've mentioned in different ways in
different countries. For instance, it is simply illegal in Europe to transfer
the credit for one's labor (or the copyright). In the Americas, however,
companies blatantly state in the `TOCs` that they own the rights to all of an
employee's work.

Of course, the extent to which the law is actually applied in Europe depends
on the conditions of employment and whether the employees have the time and
resources to wage a legal war against their employers.

------
beepbop
With a stern, calm voice.

