def square(x: Int): Int = x * x
should get hashed (and if it has dependencies, they should get hashed in there, too), and your programming language should track the unique identity of this function, and keep a list of names separately from the hashes.
This totally solves the localization problem: you can just write localizations in one place, and whenever you need to represent them, use the hash to look up your representation. This lets everyone (compatibly) name their functions/variables whatever they want.
Less intuitively, it also basically removes the need for builds. Every function has a verifiable unique identity, so your compiler can tell with 100% certainty whether or not something needs to be rebuilt. When you modify a function, it changes the hash, so you end up with both the old and the new function. Any code which requires the old function can still access it.
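A minimal sketch of the scheme (in Python, with invented helper names — Unison's actual implementation normalizes terms much more carefully than this): hash a definition's body together with the hashes of its dependencies, and keep human-readable names in a separate, per-locale table.

```python
import hashlib

def term_hash(body: str, dep_hashes: list[str]) -> str:
    """Hash a definition's body plus its dependencies' hashes."""
    h = hashlib.sha3_256()
    h.update(body.encode())
    for dep in sorted(dep_hashes):  # sorted so only the dependency *set* matters
        h.update(dep.encode())
    return h.hexdigest()

# "square" has no dependencies; its identity is just the hash of its body.
square_v1 = term_hash("x -> x * x", [])

# Editing the body yields a new hash -- the old definition still exists,
# so code compiled against square_v1 keeps working unchanged.
square_v2 = term_hash("x -> x * x * 1", [])

# Human-readable names live in a separate table, one per locale,
# all pointing at the same hash.
names = {
    "en": {square_v1: "square"},
    "it": {square_v1: "quadrato"},
}
```

Since a definition's hash transitively covers everything it depends on, "does this need rebuilding?" reduces to "is this hash already in the codebase?".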
This has been implemented in the Unison programming language (https://unisonweb.org/) which is very much in alpha. I don't know if there are other platforms implementing it.
It is so elegant and freeing. I am really stoked for the future of programming languages, when localization is just a matter of translating some words.
My experience with localised codebases (in my native language) has been horrifying - like it or not, terminology is developed in EN; you either get unnatural-sounding "borrow" words and the translation is pointless, or worse, you get people coining new terminology nobody but them understands. Not to mention fragmented communities, knowledge bases, etc.
IMO localised codebases would be a regression, not progress, and I dread it every time I see Chinese in a codebase (simply because they are a large enough market to split the dev community)
They aren't going to split it; the rest of us will adapt. English was arbitrary, and so were French and Latin and Greek before it. Probably in the form of some Romaji-like equivalent (ideograms are too high a bar), but we will start to adopt it. The largest economy dictates the lingua franca because they produce the most output.
Might be arbitrary, but there isn't really a good reason to change. Compared to the past, right now we have more people than ever before from around the planet able to communicate in a single language - English might not be everyone's first language, but it is a fine second language (if anything, I'm certain there are far more people speaking English as a second language than speaking it as a first).
The main point of a language is communication, why spoil that?
(and FWIW my first language isn't English, but I have worked in a couple of other countries with other people whose first language also wasn't English - actually it was several different languages - yet thanks to English everyone was able to communicate, which I think is something to be treasured, not something to try to disrupt... I mean... we're just discussing things here in English, after all)
My father worked in international trading. He would talk (or Telex) with people in 50 different countries each week. He always said, "English is the international language."
Yup, but there's no real reason to go from one arbitrary language to another (and I'm saying this as someone who struggles to learn languages, and English was definitely not my first).
So it would be a straight setback (the overhead of switching), for no real benefit (arbitrary to arbitrary).
Another big issue is the split in resources. Right now, anyone can learn English and get access to most programming resources. You can post your code online and get the majority of the programming community around the world to help. A long time (10+ years) ago, I was heavily involved in forums for a programming language that had a large number of Chinese developers. They'd post their code, and to help them I'd have to start pattern-matching symbols to try to figure out which function was which (or paste it in my IDE and use my IDE's tools to figure it out). It was suboptimal at best. Starting over with all the community building that's been done would be a major (if temporary) setback, in a field that reinvents the wheel way too much as it is.
I realize being able to learn English is a privilege, and requiring it acts as a form of gatekeeping. But having everyone on the same natural language provides a fantastic global maximum (at the cost of gatekeeping at the local level), and no matter which language it is, someone will have to learn it. Furthermore, asking people who went through the trouble of learning this one to learn ANOTHER is even worse (if also temporary)
The convenience of the millions of Chinese speakers dwarfs your inconvenience. That is why it will happen.
Already there are plenty of data sheets for electronic components where the English is barebones and there is a lot more Chinese text. Presumably, most of their customers are Chinese and thus their effort goes there. It makes me tempted to learn to read it so I can make use of it... but electronics for me is just a hobby.
CCP will make you adopt it as-is, or GTFO.
I find it interesting their approach to language compared to Japanese. Modern Japanese borrows so heavily from English, especially if you're doing technical work.
Chinese, at the governments request, hasn't done that. Instead new words are coined as needed. They're keen on protecting the language.
And that doesn't even account for the fact that English (and a lot of alphabet-based languages) uses spaces to mark where words begin and end. In Japanese, you can have a word that consists of a kanji plus a few hiragana characters as a grammatical marker, but there's no space between that word and the next. How do you decide where to insert a line break?
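For what it's worth, line breaking in languages written without spaces is usually handled by dictionary-based word segmentation. A toy greedy longest-match sketch (with a deliberately tiny, made-up dictionary — real segmenters use large dictionaries plus statistical models):

```python
def segment(text: str, dictionary: set[str]) -> list[str]:
    """Greedy longest-match segmentation: at each position, take the
    longest dictionary entry; fall back to a single character."""
    words, i = [], 0
    while i < len(text):
        match = text[i]  # fallback: a single character
        for j in range(len(text), i, -1):
            if text[i:j] in dictionary:
                match = text[i:j]
                break
        words.append(match)
        i += len(match)
    return words

# Toy dictionary: "食べました" splits into a stem plus the polite past ending.
dic = {"食べ", "ました", "りんご", "を"}
print(segment("りんごを食べました", dic))  # ['りんご', 'を', '食べ', 'ました']
```

Line breaks can then be inserted only at segment boundaries, which is roughly what Japanese typesetting (kinsoku processing aside) aims for.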
EDIT: I am asking more about the hashing function than about the language. Because if there is a recursive function, that means that one would need the hash of itself in order to compute its own hash. One can probably try to compute a hash iteratively until a fixed-point is achieved.
EDIT2: but... it is probably a bad idea to do that.
TL;DR: A singly recursive function works just fine: the current function has a name at hashing time which it can use to refer to itself. A set of mutually recursive functions gets hashed all together as a "cycle", and to refer to one of those functions you need the hash of the cycle plus an index representing which function in the cycle you want to call.
The unison crew is super friendly (Runar once patiently spent 20 minutes trying to explain adjunctions to me, a perfect stranger). I'll ask in their slack channel if this has a name.
English is sort of like an international ISO standard to write code in - most people can read it, comment on it, audit it, etc. If they prefer, people can write comments in their native language.
The entire reason why we have had such great open source collaboration on a global scale is because everyone is speaking the same code language.
def following_index(i: Int) = i + 1
def item_count_with_free_gift(n: Int) = n + 1
> def following_index(i: Int) = i + 1
> def item_count_with_free_gift(n: Int) = n + 1
Well, actually, a lot of people already do.
A pet peeve of mine is "by default".
In Italian it is translated (correctly) as "per difetto", and it is "fine" when other people already know the term, but otherwise "difetto" is also the translation of "defect", and you will have a tough time explaining to a layman that it is not a "defect" in that other sense.
Another "queer" word (I found it in the Italian version of some database tools) is "ometti" (which is the correct translation of "omit") but that actually I normally read as "little men".
In Excel there is "MODE" (statistical function) that is translated into "MODA" which I read as "fashion" and "MOD" (modulo operation) which is translated into "RESTO" (which has a more common meaning of "change" in the sense of what a cashier gives you back when you pay besides that of "remainder").
I think the problem you're running into is that you just aren't used to these words. I imagine people having the exact same complaints about a computer "mouse" actually reminding them of the animal or "paste" reminding them of glue, but once you get used to them they just become the terms for things.
There was one memorable time when this happened, and as I was walking the user through their Dial-Up Network Settings configuration, I asked him to right-click on his dial up connection icon, and he had trouble understanding me. I said something to the effect of, "does your mouse have two buttons?", and he replied, "yes, and I'm mashing this critter right here".
So the parallels of the peripheral and the rodent are not lost on everyone.
And you can do even worse: translate "by default" with "in caso di successo" (trans.: "in case of success")
Or also very bad: translate "by default" with "alla scrittura su disco" (trans.: "when writing on disk")
At the very least, the new term must not mean something other than what it is supposed to mean.
"Moda" in statistics is the correct classical term.
As a native speaker I would never read - in context - "ometti" as little men. E.g.: "Se ometti un parametro..." ("If you omit a parameter..."). Every language is context sensitive.
Same for "moda", when I read: "moda, mediana e media..." I would never - ever - think that "moda" is, in the context, "fashion".
"resto" - as used in: "Here is your change" - "Ecco il tuo resto" - would never be read as such in context. IE: the nursery song  "Quarantaquattro gatti, in fila per sei col *resto* di due" is an example of unambiguous use in the context of maths.
So: in context all these words make sense, as it's the case with many words in many languages.
Maybe it’s fake etymology, but it works
Now look at programming in English and it has the same "issues". "bug" means defect when it could also mean a little critter. "super" means the parent class? How do people know you don't mean the superlative? How do "short", "float", and "long" describe numbers? You "catch" things? "map" and "dictionary" certainly aren't the things I have on my coffee table by the same name. I've heard of race being a "protected class", but not a "BeanFactory" being one.
It was always arbitrary from the start.
You're making the same mistake that beginners make when they think "ugh, English is so much more precise than this language I'm unfamiliar with!" when they are completely blind to the ambiguities of English that we take for granted.
Similarly, a "file" is a bundle of papers, not an individual document. A "program" lists things that will happen in a performance, but it's not an instruction sheet. An "application" is a form that you fill in for a bureaucrat. Etc.
You can just about support the computer sense by thinking that a default choice is used when a user has failed in their duty to make an explicit choice, etc. But it is a bit of a stretch.
Of course, by now, the computer sense has fed back into broader usage, so it just seems like a perfectly ordinary use of the word.
No way: the correct translation of "by default" is "valore predefinito", anyone who translates it "per difetto" has serious issues with Italian language.
" "by default". In Italian it is translated (correctly) "per difetto" "
Correctly? Not at all!
Please don't take as given Google Translate's translations.
Nowadays it is not used much, but it was for years in IT-related texts; a reference:
BTW "per difetto" is also used in legal matters to mean "in mancanza"
Take a look at this
The only use of "per difetto" that existed before a translation of "by default" was ever needed is in the expression "approssimazione per difetto" (rounding down), and the new meaning forcefully attached to "per difetto" makes sense only in the worst nightmares after a "peperonata co' e cozze" (peperonata with mussels).
Picchi is probably just a witness to a niche neologism, one probably bred from a bad technical translation. But it was a bad neologism in the first place; it was incorrect right from the start.
Nevertheless I take your point: you did your research before you wrote "correctly".
No, as I said, it is used in legal matters; an example:
And you go tell this Italian professor:
The point is that it is actually correct in Italian: if you think about it, also in "arrotondamento per difetto" (as opposed to "arrotondamento per eccesso" - respectively "round down" and "round up") the meaning is that of "arrotondamento per mancanza", i.e. meaning #1 in the Treccani dictionary, while it is much more commonly perceived as meaning #2.
At first sight this (default = difetto) looked like so many careless anglicisms, where a faint similarity in sound and meaning induces someone into believing it is a translation.
And this happens more often than it should in technical texts. My point was that you cannot trust technical people to make good translations, and it is outrageous that a translation is considered correct just because it was written in a book. But this does not apply here.
I am coming around to the view that this was instead a delicate and careful intellectual construction that simply failed to convey the meaning.
So be it.
Actually, I believe the etymological root of "default" is exactly the same as "difetto."
"from Old French defaute (12c.) "fault, defect, failure, culpability, lack, privation," from Vulgar Latin *defallita "a deficiency or failure,"" 
The issue is that in English this was then used to mean "failure to pay a loan" (in the 1850s), and then someone in the 1960s started using it in computing to mean the option chosen if the user fails to choose.
So English just stretched the meaning way beyond the original, so it's no surprise that this new meaning doesn't really match in other languages' cognates for the word.
It's not something ridiculous out of left field.
It's probably just a matter of being used to it, because nobody would complain that mean(array) is about the moral qualities of the array (seeing mean as an adj.)
I’m Italian and I don’t agree with the author’s premise that non-native speakers have an additional translation layer (e.g. Italian -> English -> Ruby). It depends on your knowledge of the English language. As you master the language your brain becomes better at context switching; you don’t even notice it anymore (Myers and Cotton, 2002).
We should fix our schools, not translate programming languages. The technology sector speaks English. You can’t escape this language.
A large part of the English-language problem in southern Europe (it's not just an Italian problem) is cultural.
The English language is considerably more embedded in the culture of the northern European countries; for example, there's considerably less dubbing.
If Italy stopped dubbing (which will never happen, because it's a tradition), Italians would speak considerably better English, just because of familiarity.
I'm not sure there is a realistic improvement that could be applied. I'm not so critical of the Italian schools; without significant immersion, it's hard to improve or retain a language meaningfully.
I suspect that a considerable factor in the problem is that Italy is not a significantly cosmopolitan country (in terms of the amount of "traffic" of international people), so there is very little exercise (or "immersion", to use my earlier term).
In my experience:
- middle-aged people speak OK English
- the elderly speak poor or no English
- young people speak very good English
Finally, it's correct, as mentioned in a sibling post, that German has (some) roots in the English language.
Regarding Italians, I've hardly met people who actively refuse (in the sense of "dislike") the language per se, but I can't exclude the option you mentioned of "cultural laziness".
There's a cliche of the French culture being proud of their language, so I can imagine a sort of active refusal in this case.
More accurately, English, like German, is a Germanic language. German doesn't have roots in English.
It also does not help that YMMV depending on what region or city you’re living in.
Seconded. Additionally, I have never met anyone who could learn the concepts of programming but would be stopped by 20 keywords because they're in English. Half of the words Italians use when discussing IT are already in English (Internet, Router, Switch, Server, Wireless/Wifi, Smartphone), or some bastardized mix ("formattare", "reboottare", "debuggare").
And honestly, despite how it might disgust the purists of the language (who will even insist on using "Instradatore" for router, "Servente" for server, etc), I believe this is a good thing. It reduces the barrier to communication and collaboration, and you are ALREADY learning "the language" of IT, may as well have it nearly universal and get more out of your effort.
Writing code in a language other than English (I mean translating the actual code keywords, not the strings and comments) also reduces the chances someone will help you on StackOverflow by a good 95%. And many excellent books or tutorials will be less accessible to you until someone translates them.
> The technology sector speaks English. You can’t escape this language.
Writing the occasional Excel formula is probably the most advanced thing most people do on their computers, and those are translated. All mainstream smartphone apps and websites are available in German, all mainstream productivity software is available in German, Movies are in German by default, everyday life is conducted in German, work communication is done in German even in large parts of the tech sector. You can live a long, fruitful life and hardly need any English after graduation, I guess. I imagine it's similar in Italy.
I didn’t specify it because of the submission being about programming. My bad.
Anyway, yes, your experience matches mine in Italy.
The French teachers were a rotating cast of young women assigned the class no other teacher wanted, suffering the worst abuse from the kids.
French classes should be killed off in Canadian anglophone provinces, IMO. I highly doubt they have any measurable positive outcome besides teaching kids to count in French (actual rates of competent French speaking are really low here despite the country being legally bilingual). We waste so much tax money in Canada on purely performative bilingual exercises outside of Quebec and some east-coast areas.
If people really want to learn a language, they need to dedicate themselves and/or be around native speakers at a young age.
I know many, many people who would disagree.
If you polled people who learned programming and a second language in adulthood, I'd wager the vast majority would say learning the natural language was harder.
Of course "learning to program" is extremely imprecise. Learning enough of a language to use its control flow constructs is a much different bar than learning enough to write an Emacs plugin or whatever you might have in mind here.
I'm a bit conflicted. On the one hand, I am all for language diversity and empowering local languages. It may also help novices and very young learners. On the other hand you are already learning an unfamiliar formalism and it hinders interoperability, sharing and collaborating. Hm..
Anyway, the coding in their applications is done with their very own proprietary shitty programming language: WLangage (https://fr.wikipedia.org/wiki/WLangage). This language has the particularity of being available in three localizations: English, French and Chinese. And you can mix all three in the same codebase for some extra fun!
It's not very well known outside of France so it's not very surprising that this language isn't cited on the wikipedia page you linked.
Side effects may include a large number of bubble communities not being able to contribute to one another, which leads to fragmentation and friction.
What will happen to large open source communities?
Will those have only French people working on networking and Russians working on CPUs?
Also, large companies won't be able to hire from smaller programmer pools, because candidates would have to learn yet another spoken language.
Learning programming as a kid could be easier when the standard library has functions in one's native language. Learning if, while, etc. is easy, as they are just tokens. But hundreds of functions are hard to look up when you're new and cannot easily understand what a function does from a foreign description.
I just had a book in my native language which explained things.
While you certainly can learn a programming language without any understanding of the keywords, I don't think you can really say that it doesn't help if you do know them. My kid is learning Scratch, and I didn't have to explain to her what the "REPEAT" block would do.
(!i = 0 , 2 .. 15
r = foobar(i)
!r = 0)
?char = 'x' : result = 'y'
= copy : sub(1)
= find : sub(2)
= quit : sub(3)
Like, the source code stays in English but my IDE translates keywords to and from my language.
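A toy illustration of such a translation layer (Python, with a made-up Italian keyword table — a real IDE would need a proper parser and a complete mapping): swap localized keywords back to English token by token, so strings and comments are left untouched.

```python
import io
import tokenize

# Hypothetical localized-keyword table (Italian -> English).
TO_EN = {"se": "if", "altrimenti": "else", "mentre": "while",
         "per": "for", "ritorna": "return", "definisci": "def"}

def to_english(source: str) -> str:
    """Translate localized keywords back to English on save.

    Works on the token stream, so keywords inside string literals
    and comments are never rewritten.
    """
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        text = tok.string
        if tok.type == tokenize.NAME and text in TO_EN:
            text = TO_EN[text]
        out.append((tok.type, text))
    # Two-tuple mode re-spaces the code; formatting may shift slightly.
    return tokenize.untokenize(out)
```

Running `to_english("definisci f(x):\n    ritorna x * 2\n")` yields valid (if loosely spaced) Python; the reverse mapping would translate English source back for display.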
Personally, I don't like it. I prefer English function names.
However, the most annoying thing about Excel localisation is that the behaviour of CSV import/export changes depending on the OS locale setting. It doesn't even let you override it. You actually have to change the OS settings to get the English behaviour.
APL is however not a language that one would consider using seriously.
that would probably also decrease source code form size significantly!
* Claim that word order in Japanese suits ST-style message-passing OOP better than English: https://thoughtbot.com/blog/learning-japanese-the-rubyist-wa...
* APL, a language that uses symbols instead of keywords: https://en.wikipedia.org/wiki/APL_(programming_language)
* Non-English-based PLs on Wikipedia: https://en.wikipedia.org/wiki/Non-English-based_programming_...
* How does English proficiency correlate with programming mastery: https://www.researchgate.net/publication/277097932_A_CORRELA...
First, they make the classic mistake of concluding causation where there is only correlation. In particular, there is no discussion whatsoever of possible confounders, i.e. common factors that influence both programming ability and English proficiency. Some off the top of my head:
- Owning a computer with internet access at home
- Effort/ability from the individual students (reflected in overall grades in all subjects)
- Students' language ability in their mother tongue (can hint at students' difficulties in expressing their thoughts in a structured manner)
Second, their statistical analysis is overly simplistic. In particular, given that they only have 16 students with intermediate English knowledge, I would not be surprised if the difference in grade distributions was not statistically significant.
Third, quoting from the paper:
> In analyzing the data, the grade E for computer programming course was excluded because it did not reflect the relationship between English language proficiency and computer programming ability.
This is quite a red flag: they are essentially removing data that does not show the correlation they want! And nothing is said about these students: how many there were, their English proficiency, etc.
Finally, as anecdotal evidence, a sizable portion of my classmates were unable to come up with even the simplest algorithms, even when taught in and asked to use pseudo-code inspired by their mother tongue.
I find this conceptually wrong, at least, based on the abstract.
They're looking for a correlation:
> This research is conducted to find out whether the learners' English proficiency correlates or affects the learners' ability or mastery in writing a computer program
which is appropriate, but their conclusion:
> The result of this study can be used as a consideration in improving the teaching of English for Informatics Engineering learners
implies causation, which contradicts their intent.
My point is: there are a lot of variables at play, the simplest being that people who are good at spoken languages are naturally inclined to learn a computer language (d'oh).
I think a very interesting way to make the research more rigorous would have been to test with a non-English programming language (although, based on their approach, that would require a whole semester).
Am I missing something?
“affects”, as stated in the intent, is the same as causation, so, no, it doesn't “contradict” the intent.
You may believe that their results only properly support correlation and not causation, but causation was clearly within the stated intended scope of the research.
Also, correlation between English test proficiency and academic success in programming school is not a clean metric, and does not necessarily measure a correlation between “English proficiency” and “programming mastery”.
We obviously need much more localization. Particularly in error messages.
I get the need for interface localization, but I prefer no translation to a bad one. And many translations are bad.
Let's wean ourselves off sarcasm.
Thing is, the way I speak Italian is I use English as an intermediary language, because at least the tenses are somewhat similar.
Of course this strategy falls flat on its face in a more complex conversation, because spoken Italian is chock full of phrases - my favourite being "in bocca al lupo" - which taken literally translates to "in the wolf's mouth!" (note the lack of predicate here) and to me sounds no different than "Darmok and Jalad at Tanagra".
One other thing: during my time in that country, I was surprised to note that the IT job market there is actually a little smaller than in my native Poland. Weirdly enough, the salaries are roughly the same - especially after taxes. I can't explain this, because I met a lot of smart people in Italy who are passionate about their work.
(humorous PL based on a cult movie)
Natural language is weird, vague and quirky.
Maybe I’m biased from too many years of programming, but while I see the advantage of graphical languages (e.g. Scratch) for beginners, I don’t really see the point in translating Ruby...
This is just horrible Italian. It's a slang form and technically incorrect. I honestly don't know how people can write this sort of thing without fear of being seen as ignorant fools.
"la gemma RubyParser che parsa codice Ruby" [...] "patchando il Ruby parser"
"parsare" and "patchare" are anglicist slang terms (ab)used in programming circles. It's the sort of thing that (and I say that as an Italian living in England) just sounds grating. I understand language evolves, but there are Italian words already there for those concepts: to parse can be "analizzare", to patch can be "modificare", "aggiustare", "rimpiazzare".
Italian geeks used to poke fun at the '80s managerial classes and their pointless anglicisms....
"La velocità di processazione e traduzione del sorgente è aumentata di molto."
"processazione" is just not an Italian word - it's the first time I see or hear it, I cannot find it in any dictionary, I guess it's another recent depravation of an English word ("processing"?). In Italian one would use "elaborazione" or "procedimento".
"sappiate che verranno su un sacco di classi e di alias nuovi"
"venire su" (coming up) is, again, slang that you're not supposed to use in writing.
So yeah, the text is likely from a native speaker, but not one particularly well-versed in the language. Which makes the project even more bizarre.
I worry about the cultural hegemony of the English language. Projects like this are interesting, but I don't see a world where they overcome the inertia of English-based programming.
Ruby itself is a great example - created by a Japanese guy, but entirely with English keywords.
Maybe if China makes a concerted effort they could do it. ¯\_(ツ)_/¯
Please don't ever use it in production, ever.
While English isn't the biggest or the best language, it's the language with the most useful body of information regarding technology.
I do wonder if there’s still something to it. I can imagine that in the not too distant future we won’t check in text code files any more, instead checking in ASTs that represent the code. That would end all the silly debates about code formatting: everyone would check out their preferred format then check in something standard. Maybe languages could be treated the same way.
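Python's standard library can already demonstrate the idea: two differently formatted versions of the same code parse to the same AST, so the AST is a format-independent canonical form (a sketch only; a real system would also need to preserve comments, which `ast` discards):

```python
import ast

# The same function, formatted two very different ways.
dense = "def f(a,b):return a+b"
airy = """
def f(a, b):
    return (
        a + b
    )
"""

# Formatting differences disappear at the AST level...
assert ast.dump(ast.parse(dense)) == ast.dump(ast.parse(airy))

# ...and everyone can render the checked-in AST in their preferred style.
print(ast.unparse(ast.parse(dense)))
```

Checking in the AST (or a hash of it, as in Unison) would make formatting a per-developer display setting rather than a repository property.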
I once asked the Russian developers how they felt about programming in English as opposed to Russian. They all said they strongly prefer programming (both keywords and identifiers) in another language, because it feels more like a "computer language". And there really aren't that many words to memorize, anyway.
Just an interesting perspective in the discussion.
Terms used in programming languages probably sound much more awkward to a native English speaker's ears than to a foreigner's.
I wonder if "by default" had any meaning in English before it was adopted in the IT context.
Even in English, "accessor" is a word that was rarely, if ever, used before it was picked up in programming-language jargon - so much so that it has not made its way into some dictionaries, like Cambridge and Merriam-Webster.
fwiw, they have "default", in the computing sense of the preselected option, from 1966, while "by default" (in a legal context) has an example from 1764, "Where a defendant makes default, judgement shall be had against him by default."
It had many translations and there were many more built-in keywords.
Even then, you can learn the keywords and standard-library function names easily - but naming things requires a different level of language knowledge.
So teaching in mixed languages is a good compromise. It is also very helpful to immediately distinguish between things that were already there and things that have been defined in the actual program. I remember this being a helpful feature for learning the language/stdlib simply by reading others' code.
The more I think of it: it would be good to get this kind of easy differentiation back (maybe with an IDE extension?) now...
On the other hand, Scratch has localization support and a lack of syntax errors - maybe the two largest blockers for younger children...
Not practical. Not useful. But fun. I like that.
I think English based programming is an anomaly that came out of the post-WWII US/UK cultural world order, and from that sustained economic boom came modern computers, languages and the internet to a mass market.
20% of the world speaks Mandarin Chinese (not all as a first language tho). And, about the same % as speak English natively, speak Hindi (+ dialects) natively, and again about the same amount speak Arabic, and again about the same amount (all above 300 million) speak French, tho a greater % of those have le français as an nth language.
HN, and programming, shouldn't get carried away with the myopic, "fish in the fishbowl" view of English primacy. It's really not. Not globally. Just like "white people" are not a majority. Only in a very narrow, very opinionated and specific corner of the globe are those things true. It's a big world out there - much bigger, it seems, than many of you imagine from your keyboards.
Even tho HN is in English, many HNers are not native English speakers. I see people associating programming languages with English and thinking it's simply natural (if just by convention), but for most of the world, this simply isn't true, and it could have been another way. In the future it might be another way.
So I really feel it's not accurate to say there's some "problem" with people on Earth creating languages that are founded in Chinese, or Japanese, or Italian, or Russian, or Hindi, or any of the many other languages people speak. For a large corner (or even a small corner) of the world, it would not be a "problem", it would be perfectly natural.
I just don't think it's that accurate, or that useful, to think of programming and English as being somehow a natural match.
When people speak about "representation" in the "tech industry", they ought to consider this factor as well. I'm not just talking about SV; I mean "global representation in engineering". Of course, if Japanese people decide to embrace a language that somehow uses Japanese letters or characters, then there's probably not much you can do about it.
I'm just saying, don't assume it's a bad thing, and don't think English and programming somehow have to go together. Certainly at the level of logic, and CS, programming is completely independent of English (tho it is interesting to think about how the grammar of English may have constrained and driven initial language structures, concepts and flow control, and to do a comparative study of the differences in languages that emerged from cultures that used other human languages.)
What are your sources for this? Cross-checking with Wikipedia, which uses Ethnologue as a source, I find several discrepancies. For example, the number of people who speak French isn't "the same amount" as native English speakers - it's 53%, and that's if you include second-language speakers. You make it sound like Hindi and English are comparable, but Hindi has very few second-language speakers.
In fact, looking at the numbers, the choice of English looks a lot more logical than I would have thought.
The main point is that there are huge numbers of people who speak these languages, and there's nothing special about programming in English besides history.
I disagree with your conclusion from your . Any of those groups could create code based on their own languages. There's nothing at all logical about English.
For my sources, just type "X speakers" or "X speakers in the world" into Google; the infobox results are what I use.
Your wikipedia source is way outta date. 1.12 Bn Chinese speakers? Come on. China's population is ~ 1.5 Bn now (was 1.4 in 2018).
French is 270 million. Sorry, I said above 300m, but it's within 100% of the number that speak English natively, which is what I was saying.
There's nothing super special within English itself that makes it particularly well suited for programming, but the fact that it's the global language of business, tourism, air travel, diplomacy, etc. makes a good case for it. English has far, far more non-native speakers than any other language.
I don’t mean it to be provocative, I’m genuinely curious.
Mandarin Chinese has roughly the same number of speakers as all English speakers; it's just your Anglo-centric bias that makes you see the world in your skewed way, where English is the center of everything - which is exactly what I'm railing against (in English, no less)!
It does have international currency, but that's blurrier than you might think. French and Arabic both have enormous regional currency in EMEA. It's not the only candidate. Spanish in South America. There are plenty of places where English speakers are few and far between (try Japan).
But the point is, even if you can say it's global (which it's not if you're taking a truly global perspective and thinking you can "go anywhere and speak English and you'll be operating fine!"), so what? There's huge populations of people who are not speaking English and they're just as good programmers, so why not have programming languages arising from their language? It's a possibility.
It's a historical accident that English goes with coding, that's all.
Spoken English is not so good due to complex and unintuitive pronunciation, but that's not very relevant nowadays, when most media is text (and also, it would be easier to popularize a new pronunciation of English, e.g. pronouncing it as if it were Latin, than a whole new language).