
Conversation between two bottlenose dolphins recorded for first time - evo_9
http://www.ibtimes.co.uk/conversation-between-two-dolphins-recorded-first-time-what-did-they-say-1580890
======
glup
Science journalism strikes again. Between this and the Guardian's "dolphin
language discovered" piece, a really rather modest finding has been completely
overblown. All this article really says is that cetaceans communicate in the
audio domain; noise pulses (NPs) are one way to conceptualize the subunits,
which are obviously far from human phonology. But we still don't know
anything about the structural properties, and hence the similarity to human
speech. The paper says "we can assume that each pulse represents a phoneme or
a word of the dolphin's spoken language." First, that's an assumption rather
than an empirical finding; second, there's a pretty big difference between
those two levels of structure; and third, we'd want to know something about
syntax, semantics, or pragmatics before we call it a 'conversation'.

That said, if this brings more money to the study of cetacean communication,
then I'll accept whatever nonsense appears in the popular press. Pretty sure
they were talking about the fusion reactors in Atlantis.

~~~
cloudjacker
I think phonemes miss the mark with dolphins.

Their sounds simultaneously engage the hearing sense while providing the
sending party with additional metadata about the recipient. The sound
practically X-rays the recipient's body and surroundings, giving information
about their immediate reaction and state of mind.

This is much more advanced than anything we consider relatable.

~~~
KMag
> The sound practically X-rays the recipient's body and surroundings, giving
> information about their immediate reaction and state of mind.

Do you mind clarifying? I think by "immediate reaction" you mean that
echolocation gives feedback as to posture and muscle tone, which makes sense,
though I'm not sure how the frequency ranges typically used for echolocation
compare to the frequency ranges typically used for communication. By "state of
mind" are you saying that state of mind produces physiological changes in the
brain that are detectable via echolocation?

~~~
cloudjacker
No telepathy, just body analysis

------
cyberferret
“For instance, on the planet Earth, man had always assumed that he was more
intelligent than dolphins because he had achieved so much—the wheel, New York,
wars and so on—whilst all the dolphins had ever done was muck about in the
water having a good time. But conversely, the dolphins had always believed
that they were far more intelligent than man—for precisely the same reasons.”
- Douglas Adams (The Hitchhiker's Guide to the Galaxy)

------
carapace
[http://www.sciencedirect.com/science/article/pii/S2405722316...](http://www.sciencedirect.com/science/article/pii/S2405722316301177)

[https://news.ycombinator.com/item?id=12473553](https://news.ycombinator.com/item?id=12473553)

(Those sidebar stories! "International Business Times"?)

------
hackuser
Also in HN:

Dolphins Recorded Having a ‘Conversation?’ Not So Fast
(nationalgeographic.com)

[https://news.ycombinator.com/item?id=12522920](https://news.ycombinator.com/item?id=12522920)

------
xg15
The topic of "dolphin language" reminded me of another very interesting
approach I read about a while ago. I can't seem to find the article again
right now, but the main points were as follows:

Apparently there are hints that bottlenose dolphins perceive their
echolocation sense in a similar way to their visual sense. In other words,
they really "see" the objects that echolocation reveals to them. (In an
experiment, dolphins were presented with an unfamiliar object in a vision-only
setting. Then they were tasked to find the object among several decoys in an
echolocation-only setting. The success rate was well above chance.)

If you assume that their auditory and visual senses are linked, the question
comes to mind: What about non-echo sounds? Do they produce images as well?
What about the sounds made by their kin?

Long story short, the article's hypothesis was that the "words" of dolphin
language might actually be encoded images that dolphins are able to "decode"
using the neural circuitry of their echolocation system.

What I find exciting about that idea is that it would give us humans a chance
to access those images. After all, we know how echolocation works, so we could
theoretically build a "decoder" ourselves. We could use this to actually test
the hypothesis and - if successful - might use the images as a much better
starting point for analyzing dolphin language than the raw sounds.
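
The decoder idea is at least mechanically plausible: the core of echolocation
is time-of-flight estimation, which can be sketched with a matched filter.
Everything below (the sample rate, the tone-burst "click", the single clean
echo) is a toy assumption for illustration, not anything from the paper:

```python
import numpy as np

def make_click(fs=96_000, f0=40_000.0, cycles=8):
    """Toy stand-in for an echolocation click: a short windowed tone burst.
    (Real dolphin clicks are broadband and far more complex.)"""
    n = int(cycles * fs / f0)
    t = np.arange(n) / fs
    return np.hanning(n) * np.sin(2 * np.pi * f0 * t)

def echo_delay(received, click, fs):
    """Estimate the echo's time of flight by cross-correlating the received
    signal with the known outgoing click (matched filtering)."""
    corr = np.correlate(received, click, mode="valid")
    return np.argmax(np.abs(corr)) / fs

fs = 96_000
click = make_click(fs)

# Simulate an echo: the click comes back attenuated after 2 ms of travel,
# buried in a little background noise.
delay_samples = int(0.002 * fs)
received = np.zeros(delay_samples + len(click))
received[delay_samples:] += 0.3 * click
received += 0.01 * np.random.default_rng(0).standard_normal(len(received))

print(f"estimated delay: {echo_delay(received, click, fs) * 1000:.2f} ms")
```

Real clicks are broadband and echoes overlap, so an actual "image decoder"
would need far more than one cross-correlation, but this is the basic physics
such a decoder would build on.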

------
Geee
Here's a video of two dolphins creating a new trick and performing it in sync,
which requires creativity and communication:
[https://www.youtube.com/watch?v=YSjqEopnC9w](https://www.youtube.com/watch?v=YSjqEopnC9w)

------
Pica_soO
I wonder, could you deconstruct a foreign syntax by bringing new stuff into
the dolphins' environment that lends itself to being described by concepts
they already have?

Like adding a piece of floating cloth that gets described as seaweed, etc.
That way a syntax should be derivable.

~~~
visarga
I'd rather train a language model on the audio. If it can predict the next
sound, then there is some structure there. It could act as a translator.

~~~
Pica_soO
Does it really? Or did you train your model on dolphin young-bull ghetto
slang? Come into the waters of the Chinese Room and find out...

~~~
visarga
The Chinese Room is not a valid thought experiment because it tries to
evaluate the language-understanding ability of an entity that is not an
agent, then compare it to that of real agents.

The room is not grounded in reality, doesn't optimize behavior in order to
maximize reward (i.e. is not intelligent), and doesn't have any of the
external constraints (the need to survive and reproduce) that humans have. So
obviously it can't develop the same understanding as humans.

Also, the "codebook" in the Chinese Room is a weak metaphor: in reality it
would mean deep neural networks with generalization power, not just a hash
lookup. Searle has difficulty understanding how generalization works in
machine learning, so he reduces it to a caricature.

------
chipperyman573
The article doesn't really say it, but do we have any method of translating
what the dolphins are saying? Or do we just know that they are communicating?

~~~
ZenoArrow
From the article...

"The next step, Ryabov said, is creating dolphin translators so we can have
conversations with them."

~~~
xg15
I was pretty puzzled by that line. Going from "we think those sequences of
acoustic signals might constitute some kind of language" to "let's build a
translator" seems a pretty big leap to me. Shouldn't you actually learn the
language first?

In this case, we know neither how the language encodes mental concepts, nor
what kinds of mental concepts dolphins would even plausibly use.

~~~
visarga
It's hard to do that with human hearing, but we can use neural networks to
model the sound of their conversations and build a language model. The purpose
of a language model is to predict the next sound based on the past. If it
learns to do this precisely enough, then we have access to a transcription
from "dolphin audio" to word vector representations. Then we need to match
those word vectors to concepts based on observing how they correlate to their
actions.
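
A minimal version of that "can we predict the next sound?" test can be
sketched with a smoothed bigram model over already-discretized tokens. The
tokenization step is assumed away here: a synthetic repeating motif stands in
for quantized dolphin pulses.

```python
import numpy as np
from collections import Counter, defaultdict

def avg_bigram_logprob(tokens, vocab_size, k=1.0):
    """Average log-probability of each token given its predecessor, under an
    add-k smoothed bigram model fit on the sequence itself. Higher means the
    next token is more predictable, i.e. the sequence has more structure."""
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    lp = 0.0
    for a, b in zip(tokens, tokens[1:]):
        total = sum(counts[a].values())
        lp += np.log((counts[a][b] + k) / (total + k * vocab_size))
    return lp / (len(tokens) - 1)

rng = np.random.default_rng(0)
# Pretend the audio has already been quantized into discrete tokens
# (e.g. by clustering spectral frames); a repeating 4-token motif
# stands in for a structured "utterance".
structured = list(np.tile([0, 1, 2, 3], 250))
shuffled = list(rng.permutation(structured))

print(avg_bigram_logprob(structured, 4))  # high: next token is predictable
print(avg_bigram_logprob(shuffled, 4))    # much lower: structure destroyed
```

A real pipeline would first have to learn the tokenization from raw audio and
would use a far stronger sequence model than bigrams, but the comparison
against a shuffled baseline is the same idea.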

~~~
xg15
I agree building a language model would be the first step - and we need to
apply some mapping to outside observations if we want to assign some meaning
to the model.

But especially the last step seems hard to me. Humans often do one thing and
talk about something completely different at the same time. Even if their
actions and conversation are related, you usually try to avoid redundancy and
will talk about things that the recipient can _not_ already infer from your
body language and/or actions. This would make correlation hard for human
languages. I don't see any reason to believe that dolphins are any nicer in
that regard.

What I found more promising was a controlled experiment performed a while ago,
where pairs of dolphins were trained to think up and perform a choreography.
At some point, the dolphins would have to coordinate how to swim - and
researchers were apparently able to pinpoint some particular "conversations"
that looked like they were just that. So you might have a better chance of
analyzing those conversations because you can make concrete assumptions about
their content.

Still, even then you're kind of behaving like an alien that tries to learn
both "the culture of humans" and the complete English language at the same
time - by observing a handful of soccer games.

~~~
visarga
Then we could identify concepts in those conversations and play them out on a
speaker, to see if they do what we thought the sound means.

------
jfoster
Makes me wonder what would happen if they played back one side of the
conversation to another dolphin. If they kept doing that, and other dolphins
responded, they would be able to record lots of dolphin conversations.

~~~
joemi
Depending on how their language (and brains) works, playing back a recording
could potentially be absolutely meaningless to the point where no dolphin
would have reason to respond. That said, it probably can't hurt to try.

------
funkysquid
This is fascinating - I wonder what sort of information they're able (and
inclined) to communicate to each other - food? locations? types of food?
captivity? That could become an awkward conversation...

------
foota
I got redirected or something to an ad after visiting on my phone.

~~~
finid
Couldn't read it because of the video ads.

I don't mind ads, but that was too much.

------
jkot
> _Conversation between two bottlenose dolphins recorded for first time_

For the first time? Shouldn't that be 1950 in the title?

------
Implicated
I can't wait to hear about what kind of things these creatures have to say
about us. Sentiment analysis, anyone?

