Reading a review on booking.com today, I came across a strange thing: a text that, when translated by machine, says the opposite of what it says in the original language.
The phrase was: we didn't sleep too well; it was strange because it was the first sentence of a top-rated review. Why would someone give a top score to a place where they didn't sleep well?
But it was an (automatic) translation from French. The original French version was: on n'a super bien dormi.
That's incorrect: it should read on a super bien dormi, which means "we slept really well". But said out loud, it sounds like on na super bien dormi, because the n at the end of "on" creates a "liaison" with the following "a" so that it flows more smoothly.
Yet, when written as "on n'a super bien dormi", the extra "n'" can be construed as a negation. Most native French speakers will intuitively understand that it isn't one (a real negation would also need a "pas"); it's just a spelling mistake. But the machine translator doesn't (at least on booking.com: I just checked on Google Translate and it doesn't fall for it!)
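If you want to poke at this yourself, here's a minimal sketch of how you might compare the two spellings against a machine translator. It assumes the third-party deep_translator package (a convenient wrapper around Google Translate); what you actually get back depends on the engine behind it.

```python
# Rough sketch (mine, not from the original post): compare how a machine
# translator handles the correct spelling vs. the typo with the stray "n'".
# Assumes the third-party deep_translator package is installed.
from deep_translator import GoogleTranslator

phrases = [
    "on a super bien dormi",    # correct spelling: "we slept really well"
    "on n'a super bien dormi",  # typo: the stray "n'" can look like a negation
]

translator = GoogleTranslator(source="fr", target="en")
for phrase in phrases:
    print(f"{phrase!r} -> {translator.translate(phrase)!r}")
```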
This got me thinking: would it be possible to craft special texts that would mean one thing in one language, and a totally different thing when machine-translated in another language? That would be quite funny -- or terrifying, depending on the context.
> This got me thinking: would it be possible to craft special texts that would mean one thing in one language, and a totally different thing when machine-translated in another language? That would be quite funny -- or terrifying, depending on the context.
This is the subject of age-old jokes. Here's one that I know off the top of my head, but I'm sure plenty of others have their favorite joke translations.
Rumor has it that US translators' first attempt at automatic English-to-Russian-and-back translation was on the Biblical phrase "The spirit is willing but the flesh is weak" (Jesus's remark about how, even when we want to do the right thing, we sometimes act against it).
Allegedly, translated into Russian and back into English, it came out as something like: "The vodka is good, but the meat is rotten."
This happens a lot with Japanese translations, mainly because the subject (eg. you, me, him, etc) is frequently omitted from a sentence, and needs to be inferred from context. Machine translations rarely have that context, but need to insert a subject to form a natural sentence in English, so they'll just guess. For example "he's not hungry" can machine-translate into "I'm not hungry", and sometimes this can really mess with a sentence's meaning or create major miscommunication. A human translator would at least warn you when there's ambiguity, but machine translators tend to just go with their best guess.
I've had this happen to me a few times, also with French <-> English translations, but I don't think it was due to typos in every case.
One case was a terms and conditions thing, where the translation claimed that the supplier would assume all liability for misuse of the product, which was clearly not what the French (or common sense) intended.
I liked the article. Part of what he's touching on is actually far more general. Language translation/speaking is just one skill a human can acquire though. For him, part of what gives his life meaning is being able to write a letter or have a conversation in another language, and the fact that this is hard is part of the meaningfulness of it.
When everything we do becomes easy and we can just ask an AI/robot to do it for us, how will humans replace the meaning that comes from skill acquisition/performance?
The above translation is common, but I always thought it was not ideal. The much more common meaning of "perdre" in French is not "to waste" but "to lose", as in "The time you lost ...".
I think this captures the spirit even better, since the time sure is lost, but not wasted.
Does knowing an AI can do something reduce our enjoyment of doing that thing ourself? I don't know if we have enough data/experience to say.
An example in my own life: I'm sure an AI can perform a piano piece (via MIDI) better than I can, but I've literally never considered that this has any bearing on how much I like playing piano.
In the very long term (hundreds of years) it seems like it will really be optimal to have a single shared language for people of Earth to communicate with each other easily and effectively. Not sure how to balance this with the cultural value of language. But it just seems like it can't be helpful to understand people less well. Seems like translation is a stopgap on this journey. Also seems dangerous for AI to be intermediating all human communication – who knows how it might subtly adjust your words?
>In the very long term (hundreds of years) it seems like it will really be optimal to have a single shared language for people of Earth to communicate with each other easily and effectively.
I disagree. I'd argue that a monolingual species is harmful for the same reason monocultures are harmful in general.
Language diversity facilitates cultural diversity. It's a "memetic speed bump" that enables the isolation needed for cultural divergence.
On a hundred-year timescale, I'd much rather see hundreds of new, diverse cultures bloom than some sort of bland Facebook-y monoculture where everyone is communicating with everyone else all the time. I'm hoping in the very long run, space travel will help facilitate cultural diversity. E.g. Mars is ~10 light-minutes away, meaning Earth/Mars phone calls aren't really feasible. But until then, we should preserve the cultural diversity we have, and fight homogenization.
But cultures don't have to be aligned with language. We have many internet subcultures and such that are geographically distributed but able to form because of shared language.
And, rightly or wrongly, that shared language will likely be English. It is already the language of commercial aviation, and it is the language that people in Europe (and other places, presumably) use to talk to each other when they don't share a native language. And sometimes even when they do.
It is in some ways a bit peculiar, with a simple grammar but an extensive vocabulary (much of it thieved from other languages). It is, on the face of it, quite ugly, yet great poetry and works of literature have been created in it. Some non-native speakers say they feel freer using English instead of their native tongue, but of course that is a somewhat nebulous proposition, hard to substantiate, even if there is a little truth to it.
The term "thieved" is too strong here. Languages borrow words all the time, like how various English words have made it into other languages.
Many of the French words came about due to England being invaded by the Norman French. A lot of food words reflect who used which terms: the French-speaking nobility named the meat on the table while the English-speaking farmers kept the native names for the animals -- this is why the meat words (beef, pork) are French-based and the animal words (cow, pig) are Anglo-Saxon/Germanic.
In Florida, you see a lot of Spanish words entering English because the cultures coexist and mix. This happens a lot near country borders, where languages blend together as the different speakers mix.
Do creole languages (of which there are many) steal the words from their parent languages?
The relatively simple grammar is a huge advantage. The major complaint I've heard from learners is about the spelling. It would be nice to reform the orthography, like the Real Academia Española did for Spanish.
There’s always pressure towards a shared language for commerce/law (and this varies with political winds) but everyday communication is just negotiated among the people you encounter regularly.
Humans are way too clever and adaptable to receive and reify one language and will always be tuning in to accidental quirks and personal optimizations. There will always be many languages, some old and some new, because people will always live within the practical little social bubbles that get them from today to tomorrow.
I think this would be a net loss for humanity. If the Sapir–Whorf hypothesis is true, there would be less variety of ideas and less creativity in a world where everyone speaks one language.
On a funny side note, I recently overheard one Spanish speaker casually comment to another about how people's personalities seem to change when they switch which language they're speaking. It was wild to see someone who's never heard of that hypothesis arrive at it all on their own.
I don't think that's necessarily Sapir–Whorf at work. It could just be behavioral conditioning -- the same way you might have a different personality at work, vs when visiting your parents for the holidays.
That's assuming the effort and capital will come through to preserve those obscure languages, especially ones that originate from geographically isolated places. I'd love to see it in places like the Philippines, where the indigenous script and tribal patterns and designs are coming into vogue now.
Those languages have barely any presence online, so there isn't remotely enough data to make a good translator... just look at Whisper's accuracy for less-resourced languages, for reference.
Douglas Hofstadter is one of my childhood heroes. I have devoured all of his books and always felt that his curiosity about artificial intelligence was similar to mine.
Now that we have made this huge jump in AI capabilities, I am really surprised that he mostly seems to have negative feelings about it.
To me, it is quite the opposite. With these new LLMs, a childhood dream has come true. And I am mesmerized by the progress and possibilities.
I wouldn't see it that way. When the telephone came out, there was a burst of, "Well, that's it for people visiting each other in person!"
I didn't read this as him saying, "We won't learn new languages," but rather, "We won't be forced to, and learning a new language is a wonderful thing. In our gain, let's not lose sight of what we're losing."
Star Trek imagines that even in an age where computers have universal translators, people will still learn languages for "personal development" purposes, or to honor their ancestry, or simply to not have to rely on a computer for a regular activity. Ideally, the translation ability of AI should make it easier for us to practice conversations in new languages, and actually encourage more language learning through total immersion without having to travel to a country where that language is the primary one.
Whether Star Trek is aspirational or inspirational, I guess we'll have to see.
Yeah... I think there is a very fine and blurred line between using AI to augment / enhance something and using it to completely replace it. And generative AI kind of leans more toward the latter... at least in the way it is being used now:
Machine translation is great and can mean the world for minority languages, but now every language-learning app has to have a chatbot, which, I think, does replace human conversation -- learners say that now they don't need tutors, etc.
He wasn't an AI practitioner; he wrote about it, and I'm somewhat glad he hasn't said a lot about the current machine learning advancements. Many people accomplished in their own scientific field have tried their hand at commentary or criticism about AI and come off as completely clueless and uninformed. A Nobel prize in physics does not qualify one to wade ankle-deep into machine learning and start pontificating about this and that.
Hofstadter was an early and lasting influence on me too. The most consistent thing in his writing is a sense of intellectual joy, playfulness, and curiosity. When tools or toys are venues for people to create those experiences he's excited about them, and that was the position of "AI" for most of his career.
He now perceives AI to be a tool that will limit, deny, or replace these feelings & experiences, and so has shifted his relationship to its potential. I'm slightly surprised that he sees AI this way, but not at all surprised this is how he feels about it given that he does. I think it's too early to say if he's right but his feelings are certainly consistent with his views and long history of playful curiosity.
Yeah, I don't understand how Douglas Hofstadter came from [0]
> There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. "Do you want to play chess?" "No, I'm bored with chess. Let's talk about poetry." That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity -- that is, a programmed ability to "jump out of the system", so to speak -- at least roughly to the extent that we have that ability. Once that is present, you can't contain the program; it's gone beyond that certain critical point, and you just have to face the facts of what you've wrought.
to [1]
> Although my prediction about chess-playing programs put forth there turned out to be embarrassingly wrong (as the world saw with Deep Blue versus Kasparov in 1997), those few pages nonetheless express a set of philosophical beliefs to which I am still committed in the strongest sense.
and now to
> Why would anyone want to devote thousands of hours to learning a foreign language if, by contrast, they could simply talk into their cellphone and it would instantly spit out “the same message” in any language of their choice, in their own voice, and with a perfect accent to boot?
not to mention his other articles recently. It's as if he forgot that there are still human beings learning, playing, and competing in chess or go, dedicating thousands of hours of their lifetime to it.
This really betrays Hofstadter's age and the fact that he's being left behind by technology and by the modern world.
It's really sad to see something like this happen to him.
Instead of seeing it as everyone having an amazing language coach in their pocket, or as a way to radically accelerate language learning, or to bring everyone together while still keeping those few core languages you know (you will always be ignorant of most languages), or to save languages that might otherwise die, he focuses on a hypothetical negative.
This take is both glib and ageist. Hofstadter isn't writing this because he's old. It's because he backed the wrong horse.
Hofstadter was a brilliant and up-and-coming AI researcher in his 20s, wrote an amazing book, started the Fluid Analogies Research Group, and was going to create the next generation of AI based on the power of analogies.
Then for decades -- nothing. His work on analogy was completely overtaken by deep learning. Recently there was an article on his partial recanting of the whole idea of analogy as the core of cognition, something which has been foundational to his research.
Wordplay and translation always figured prominently in both his popular writing and his research as examples for the power of analogy. He has a whole book on translation, for example.
So it's hardly surprising that, when the very "stupid" approach of backprop on a neural net can work with analogy better than any system he's ever created, he might focus on the negatives. His entire research project has been usurped.
The insult-people-for-being-car-skeptics trope is starting to show its age [1].
It's true that old fogey skeptics who have a vested interest in the old ways are disposed towards motivated reasoning, but they're also the ones most able to call out the motivated reasoning of the few who profit disproportionately from pushing new technologies.
> they're also the ones most able to call out the motivated reasoning
I strenuously disagree. The people with most to lose have the least objectivity and the greatest tunnel vision. There’s no silver lining to such bias. Being able to shoe a horse doesn’t give you special insight into civil engineering and sociological trade-offs.