
The Shallowness of Google Translate - DanielleMolloy
https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/?utm_source=twb&amp;single_page=true
======
peapicker
Great article. Seems a logical continuation of his 1997 book "Le ton beau de
Marot: In Praise of the Music of Language" (a book primarily about the
challenges involved in translation), which I enjoyed when it came out.

------
Isamu
> my longstanding belief that it’s important to combat exaggerated claims
> about artificial intelligence.

I think Google translate is pretty amazing, but yeah, there's no need to
declare game over. We are only beginning.

Overestimating progress goes back to the early days of computers. Seeing
impressive results, people think the computer must possess something close to
human intelligence on some level, because a human would have to be pretty
smart to perform that well.

But no, with computers we have idiot savants.

~~~
koverstreet
Yeah, with all the people making ridiculously overstated claims about what
deep learning can do I'm pretty sure we've got another AI winter coming.

And it's a shame. I really want to see more genuine progress in AI research; I
really want to understand _what consciousness is_. But this boom/bust cycle
that happens every time there's a tiny bit of real progress is a painfully
inefficient way of getting there.

~~~
YeGoblynQueenne
If there is a winter, I think it's going to be very different from those in
the past. What people used to call a "winter" was a drying-up of the funding
for AI research. However, in the past, this research was funded primarily by
public money, and specifically by defense budgets. And it was cut when
scientists failed to produce the army of super robots the generals thought
they were promised. In the present, however, there is a lot of money put into
AI research by industry: Google, Facebook, Microsoft, Amazon, and IBM, as
well as many other, smaller companies.

The amount of investment in AI by those companies is simply unprecedented, and
so is the number of people who, drawn by this river of dosh like moths to a
flame, are pursuing AI as a career (even if that only means the statistical
machine learning side of AI that those companies invest in).

What this means is that the current branch of AI has become "too big to fail".
And that has nothing to do with how successful it is. As long as it can be
monetised and the industry leaders can show some return on their investment,
"AI" will keep growing.

A winter, if it comes, will be a winter of knowledge, not of funds. We will
end up with so much unusable, meaningless, laughably bad "research" that any
significant contribution to knowledge will simply be buried under a ton of
rubbish, never to be found.

So the money will keep flowing in. But what will come out the other end will
be utter nonsense.

------
DanielleMolloy
Automatic image captioning research has produced some very impressive results
in recent years. There are also other ideas in this domain that handle
language quite well (e.g. visual question answering). If classes in images are
interpreted as symbols, it appears that symbolic and statistical approaches
can in principle be combined. I wonder whether this can somehow be transferred
to learning language for automatic translation.

------
Jyaif
The Shallowness of Douglas Hofstadter.

> I am not in the least eager to see human translators replaced by inanimate
> machines. Indeed, the idea frightens and revolts me. To my mind, translation
> is an incredibly subtle art that draws constantly on one’s many years of
> experience in life, and on one’s creative imagination. If, some “fine” day,
> human translators were to become relics of the past, my respect for the
> human mind would be profoundly shaken, and the shock would leave me reeling
> with terrible confusion and immense, permanent sadness.

If (or more likely, "when") computers translate better than humans, respect
for the human mind should _increase_. It's quite a tough problem to solve, far
harder than manual translation, which thousands of people have been doing for
centuries.

~~~
jamesmcintyre
I believe Hofstadter is more worried about people prematurely adopting this
technology en masse simply because, by its inherently impressive nature and
mysterious inner workings, it generates its own terrific PR, which clouds the
perception of its actual efficacy at a deeply cognitive task.

Kevin Kelly, in his book What Technology Wants, speaks to the same willingness
of human societies to eagerly jump into new, unproven, or potentially
dangerous technologies without ever stopping to consider the effects of their
adoption and long-term usage. A whole chapter is about how Amish culture is
not necessarily anti-technology so much as it is slow and disciplined in
adopting technology. The Amish hold their people's culture above all other
values, including productivity or connectivity, and so to them it's a clear
decision to be skeptical of new technologies at first and to deliberate over
adopting them until the elders of the community can come to a consensus. Even
then, they usually start with small "pilots", testing the technology with a
select few individuals. Kelly doesn't suggest we adopt Amish culture, simply
that there is something to be learned here.

I think AI would need to become a fully robust form of consciousness, capable
of generating its own novel ideas spontaneously and of forming a sort of
gestalt from a mix of information, logic, and patterns, but also "feelings"
(or whatever term would be used for "feelings" in a conscious AI), in order to
produce the same type of translation a human can. Granted, even without
consciousness it will likely come extremely close over time, with more
training and improvements to the algorithms, but as Hofstadter explains, it
takes more than a sort of algorithmic proficiency:

"..To me, the word “translation” exudes a mysterious and evocative aura. It
denotes a profoundly human art form that graciously carries clear ideas in
Language A into clear ideas in Language B, and the bridging act not only
should maintain clarity, but also should give a sense for the flavor, quirks,
and idiosyncrasies of the writing style of the original author. Whenever I
translate, I first read the original text carefully and internalize the ideas
as clearly as I can, letting them slosh back and forth in my mind. It’s not
that the words of the original are sloshing back and forth; it’s the ideas
that are triggering all sorts of related ideas, creating a rich halo of
related scenarios in my mind. Needless to say, most of this halo is
unconscious. Only when the halo has been evoked sufficiently in my mind do I
start to try to express it—to “press it out”—in the second language. I try to
say in Language B what strikes me as a natural B-ish way to talk about the
kinds of situations that constitute the halo of meaning in question."

I predict we will prematurely adopt machine-learning-powered translation at
the cost of losing the depth, clarity, and richness of the human expression of
ideas, but for the majority of us the impact may be minimal, especially if
great works of human literature are still translated by humans.

~~~
koverstreet
I think it's just as much that Hofstadter doesn't want overstated claims to
give AI research a bad name (again!) when people finally realize how limited
the current approaches are.

~~~
jamesmcintyre
I agree it's probably just as much a warning against overstated claims in
general, since those are harmful for numerous reasons.

