It is also one of the reasons why English is taking over (and, in less serious cases, being borrowed from). English is fully developed for social, scientific, technical, complex, specialised and precise communication to an extent rivalled by maybe only a dozen other languages.
On the Northern Sotho radio stations you often hear kids say, when asked about colours, "Ke ye blue" ("It is blue"). The radio presenter would sometimes correct the kid: "Aowa, ka Sesotho re re ke ye tala" ("No, in Sesotho we say it is tala"). However, there is something deeper and perhaps unassumingly complex at play: the kid is more accurate.
The value of languages like these is not the ability to convey or store technical or scientific information. But if a language disappears, it is like losing in one shot the whole of Beethoven's work (and a bit more than that). Sotho may have fewer colour words (the word for yellow covers brown too), and it doesn't have degrees of comparison in the sense of "far, further, furthest" (having only "far" plus emphasis), but it has... verb modifiers!
For example: use = to cause to work. Repair = to cause to be good. Understand = to cause to hear.
Wow, am I telling you that verbs convey and encode causality in these languages? Yup, that's right.
I don't think this would be accepted by very many linguists. Complexity and precision are a function of vocabulary, and vocabularies are easily transferred, as your own example shows.
English often proves itself deficient when it is taken over by new peoples, especially for describing the social realities of the non-English-speaking world. But of course, non-native speakers will patch up these deficiencies.
Finally, I am very sure that no linguist would agree that English is taking over the world because of this or that aspect of its vocabulary or the (supposed) simplicity of its grammar. It is doing so because it is useful and has high status due to social and political factors.
But the advantages of English are in a "modern" society, whatever that may be. There are deficiencies in these languages, too, that are not just about vocabulary. Things like "I have never" or "I would never" are a strange example.
In the dialect where I come from, you say in Sotho: "Ga ke tsebe ka ya kua." This means: "I don't know about going there." But contextually it means that you have never been there.
I am not saying you can't write a scientific paper in these languages at all, but I am saying that you need to standardise the languages better before you can do that. At the present state of things, essentially, you can't write scientific papers. By the way, Afrikaans was wittingly and deliberately developed into a scientific language. It did have a significant head start though, having been mostly derived from Dutch.
An example of Sotho's superiority is in idiomatic language. There are running metaphors that English doesn't even come close to matching. It kind of makes sense: if you think about it, precision of meaning trades off against idiomatic richness.
There was a funny meme that essentially illustrated how different Sotho grammar can be, and how your figurative use of language can essentially hide meaning through ambiguity. 
https://www.youtube.com/watch?v=GCU2Uh42qS4. Explanation: She is actually just saying that she stole her mom's money that should have been used to make pots for people who had paid. Literally: "I did the thing that made that the pots couldn't be made."
The way to make a language fit for writing a scientific paper in it is to write a scientific paper in it, and whenever you're at a loss for words, either make them up as you go along or take them from another language. The latter option is usually chosen by people already fluent in that language, which is how English scientific vocabulary got its multiple layers of Arabic, Greek and Latin loan words.
However, loanwords can render the underlying logic incomprehensible. For example, I recently watched the Japanese series アンナチュラル annachuraru (Unnatural [Death]) on a Chinese streaming platform (because that one didn't have region locking blocking European IP addresses). When the word エチレングリコール echirengurikōru (ethylene glycol/ethane diol) came up, obviously someone had to ask to repeat it, because it's such a long and complicated foreign word. The Chinese commenters reading the subtitles were very amused by that, because the Chinese translation is 乙二醇 yĭ èr chún and has a systematic composition: 乙 yĭ is used like the Roman numeral Ⅱ or the Latin letter B in enumerations and indicates here an organic molecule with two carbon atoms. 二 èr is the number two. 醇 chún indicates alcohol. Essentially a mirror image of "ethane diol" that even someone without much knowledge about chemistry can understand.
The only other point I would like to add is the actual internal variation that would complicate a scientific article. The dialect that is typical of my home area and the dialect that is taught at school have incompatible spelling due to pronunciation, but they are mutually intelligible in speech with some amount of deliberation.
An example would be "Ke tla go bona." (I'll see you [later]; correct spelling). In my home variety that's actually pronounced "Ge dao bona". There are not enough people interested to find a way around the standardisation issue and, in fact, whichever dialect becomes standard inherently creates a form of discrimination. This is what has happened with the Sepedi dialect of Northern Sotho: it has become the politically correct term. I've heard many people introduce themselves as a Mopedi (a Pedi person); when I would later point out to them that they are from a different dialect, they would immediately concede that they meant it as an umbrella term. In their defense, there is a difference between linguists and actual users of a language. I guess you could say Sepedi is colonising the other dialects, though without any violence as the term might insinuate.
One reason might be that "Sesotho sa Leboa", the correct term for the language, Northern Sotho, is almost never used in spoken conversation. The term was almost certainly invented by linguists. The most common term would be "Sotho", but due to confusion with Southern Sotho (a separate language) the default instead becomes "Sepedi". It is reminiscent of "Castellano" vs. "Español".
For the curious: sheng-wen-sen-te-he-ge-lin-na-ding-si. 圣 sheng is a fully conventionalized loan-translation of the Christian word "saint". 文森特 wen-sen-te is a sound transcription of Vincent. 和 he is an ordinary Chinese word meaning "and". 格林纳丁斯 ge-lin-na-ding-si is another sound transcription, for "Grenadines".
>  https://www.youtube.com/watch?v=GCU2Uh42qS4. Explanation: She is actually just saying that she stole her mom's money that should have been used to make pots with for people that had paid. Literally: "I did the thing that made that the pots couldn't be made."
This literal rendering isn't far off of normal (if inelegant) vernacular English. "I did the thing that made [it] [so that] the pots couldn't be made."
Also, with "made it" in English, "it" usually needs to be specified: is it the subject, the object, or perhaps a neutral reference to the situation? In Sotho there is a different approach, whereby you have noun classes which are in concord with the rest of the sentence. That way, you can infer which "it" you are talking about, because the different nouns are hopefully in different classes and "it" is thus qualified. This can in some cases make a sentence less ambiguous than the corresponding English sentence. Essentially, you then have multiple versions of "it". Humans are always in separate classes from objects, which again are in separate classes from, for example, plants. You can point out, of course, that it's unlikely that a plant is both the subject and object of a sentence. The language would be sensible even without these classes, but they make it much more rhythmic and easier to follow the gist of a discussion.
A possible original version would be: "Ke dirile selo seo gore dipitša ga di dirwe." In this case you have "seo" in concord with the thing being done and "di" in concord with the word for pots.
English has taken over because, in our local history so far, it's been the official language of the last two main world military (hence cultural & business) powers: the UK in the 19th century, the US in the 20th, boosted by WWI & II and by furthering globalization.
It's also been shaped worldwide by this globalization.
In one or two centuries, it may very well be that another language will have taken over.
Well, English is a pretty simple language, grammatically. Compared to its Germanic neighbours, it doesn't have gendered nouns, it doesn't have the funky word order of German, nor the funky word order of the V2 Scandinavian languages. It has fewer letters in its alphabet, and it has fewer phonemes.
It does have a very large vocabulary, mostly due to the French influence, and it's in dire need of a spelling reform to fix all the accumulated crap, but those are hurdles for writing English or mastering English. When it comes to speaking simple English, the bar is lower than what is required for speaking simple German, or simple Norwegian, for example.
It's also pretty forgiving when it comes to word order, which means understanding bad English is also easier. So I think there's some sort of lowest common denominator merit to it that made it easier for it to take over.
Just compare the delicate differences between British English & American English, if only for the different understated meanings.
The "funky word order" of German makes sense from a German perspective (and back to ancient Latin & Greek).
Mandarin is not simple compared to English.
Yet you can be quite assured that if China, for instance, got the military & cultural influence the UK/US have had for two centuries, then by 2200 it would have spread a globalized Mandarin worldwide rather than something else (as happened with English, and with French and Spanish before it).
For sure, but my point was that the simpler (relatively, comparatively) a lingua franca is, the less pidgin you get, and the more of the actual language you get instead. And that's a merit of the language itself, not the powers that back it.
Compared to Indo-European languages. Look at other language families, and our insistence on articles, inflection for tenses, plurality, and case (in the case of pronouns) looks unnecessarily complicated.
And then there is the insane polysemy of English words. "get" has a wide variety of meanings, some of which are completely opposite from one another!
> When it comes to speaking simple English
I should also point out that English has relatively complex phonotactics. A word like "strengths" is excruciating for anyone whose phonology doesn't have consonant clusters in the first place.
Getting English right is a fractal of traditions and exceptions, but its MVP is very forgiving.
Yes, that was exactly my original point, thanks for explaining it more hacker-news-y than me. :-)
Or Greek before? https://en.wikipedia.org/wiki/History_of_science_in_classica...
Or Chinese? https://en.wikipedia.org/wiki/History_of_science_and_technol...
There are various reasons why a language may become a lingua franca (LF). If anyone wants to know more about the history and process of this, I recommend The Last Lingua Franca: English Until the Return of Babel by Nicholas Ostler.
He goes over past LFs like Persian, Aramaic, Ancient/Koine Greek, and how they waxed and waned over the centuries, and what may happen to English in the future.
I agree that English vocabulary is very well-developed for concepts in the modern world, but I hope that readers will recognize that it is not this way because of some essential superiority of the language, but because many people are using it for precise communication.
Imagine, for example, that the entire English-speaking population were to die off but for a few rednecks who don't understand much of the world, and you will find that English as spoken by them is too imprecise to apprehend most things.
Or think of when a casual conversation steers into specialist territory, and those of a different profession feel like the conversation is Greek to their ears. English speakers would need to be trained in your profession's jargon and idioms to be able to understand what is being discussed. Here the value of English is that there is already a community of people who understand this idiom, whereas to discuss the same matter in another language would require fixing jargon and idioms that nobody is already conversant in.
That is, it is the existing communities of speakers who understand specialized English that make it developed for precise communication in a variety of advanced topics, and not some essential superiority in how English interacts with each individual's mind that other languages cannot potentially achieve.
This is an important point to me and something often misunderstood, even (and especially) by mother-tongue Northern Sotho speakers. Perceived superiority is sometimes mere utility. As far as I know, the spelling of Northern Sotho is derived from German missionaries (though perhaps also influenced by Dutch). This has made Northern Sotho spelling incompatible with English spelling. A prudent 7-year-old Mosotho would use this as an argument for rather learning English spelling, but would also make the mistake of discounting their language based on what is essentially just an unfortunate historical choice of orthography. Contrast this with Shona, which uses English-derived spelling. The Shona people generally have impressive skills in English spelling and eloquence in the use of English. (Though this is also due to Zimbabwe being influenced more by the British than anyone else.)
I think we need to admit that language itself is something a young learner sees through the lens of utility, and that superiority as a social concept will always be grossly subjective. It is funny how "supremacists" often persecute the most competent people in their own culture (their competent brethren tending to be nosy about and challenging towards the supremacists).
> For example: use = to cause to work. Repair = to cause to be good. Understand = to cause to hear.
> Wow, am I telling you that verbs convey and encode causality in these languages? Yup, that's right.
This isn't really a feature of modern English, but it certainly isn't something exotic. It was a feature of earlier English, and the effects of that are still obvious today.
For example: raise = cause to rise. Fell = cause to fall. Lay = cause to lie. Drench = cause to drink.
(Causative verbs are alive and well in modern English, but they are more likely to be zero-derived than to use a separate overt grammatical form.)
In Northern Sotho it gets pretty far:
nyaka = want
nyakišiša = investigate
Now in modern usage, the color orange is named after the fruit, but the name of the orange fruit differs by region. Some places refer to oranges as tangerines, with no differentiation between oranges/tangerines, so the word for orange is actually tangerine in those regions.
There's a good write up of other colors here: https://everything2.com/title/Chinese+colors
This is also true in European languages. (Including English.)
In this case, it's possible that the different conventions for what color to call an egg yolk originated from egg yolks that were different colors. Crack an egg in China and you'll get something that is obviously different -- and much redder -- than what you'd get from an egg in the US.
This freaked me out enough that I exclusively purchased high-end eggs from City Shop, which were a reassuring yellow.
Imho, indigo and violet are the same thing (or at least very close), and cyan is missing, so red, orange, yellow, green, cyan, blue, violet would have made more sense.
But of course it's indeed quite possible that the language shifted and it made sense in Newton's time.
I'm not a native English speaker though, and as far as I know neither cyan nor indigo are commonly used in standard everyday language, but more in their own niches (printing, clothing, ...)
The wiki article gives the following etymology:
> The word chromatic comes from the Greek chroma, color; and the traditional function of the chromatic scale is to color or embellish the tones of the major and minor scales.
But this feels a bit weak to me. Would be nice to have a more authoritative source.
There are 16 different shades, or colours, there. If I were to point to any one of them individually and ask my young children what colour it was, they'd almost certainly say "blue". And I'd understand them fine and consider it correct. Likewise, if they were explaining something they saw during the day and said it was "blue", I might make an assumption about which of these shades it was, but I intuitively know it could have been any of them. And most of the time the distinction isn't that important for understanding and sharing experience.
When the distinction is important my kids would probably simply say "light blue" or "dark blue". Additional adjectives will get used to clarify the relative difference between the colours.
Soon they'll learn "sky blue", "baby blue", "navy blue". Then teal, turquoise, aqua, cyan, cerulean, etc.
Assuming the language has those words. That only occurs when the need to distinguish is common enough to establish a shared understanding across a large enough group of people that they effectively reach a consensus that it's now a thing, like English speakers did a few hundred years ago with the introduction of the colour orange. Nobody invented a new colour; we started using a new word to describe something that had always been there.
I read a book a few years ago called Alex's Adventures in Numberland (https://www.amazon.com.au/Alexs-Adventures-Numberland-Alex-B...). In it he has a story about a group in South America who have no words in their language for a number greater than two (or maybe it was three? It's been a while since I read it). Anything larger than that was just referred to as "many". It's not as though seeing more than two of anything was uncommon, most families would have a half dozen to a dozen children. But if you asked how many children they had it was just "many". Whether it was eleven or twelve just wasn't an important distinction to them.
He goes on to discuss how language can expose what's important to a group and shape thinking. The introduction of a concept and word for zero was hugely important for our advancement in any number of fields. He also discusses how our constant pursuit of ever-increasing levels of specificity has its trade-offs: we seem to be becoming increasingly bad at estimating (which is about language, social expectations around what we value, and a reliance on tools).
Anyways, it was a story about language and numbers that I thoroughly enjoyed.
Did you perchance grow up as an English speaker, or a speaker of a language with a set of colour terms similar to English?
> Surely when you look at grass and the sky you feel you need different terms to describe them
Well, I'd guess that all languages have different words for sky and grass. The difference is how you relate those words to words for other things that have similar colours. There are many languages with fewer colour terms than English, but also some with more, and as far as I remember it's usually green and blue that have more shades, if you'll excuse the pun. Like the slice of spectrum covered by green-blue in English will be covered by more words in some languages.
That probably depends on how frequently do you need to describe something as "sky-colored" vs "grass-colored". I can't really think of many things in nature that are blue (some people's eyes, the occasional flower or gemstone) so if you don't need that word to describe anything else you might just leave it at "the sky is a weird shade of green" rather than having a color that only describes one thing in the universe.
It stands out against cardboard and all the variously-colored plastic boxes I own, and it's light enough to offer great contrast with a black Sharpie on it.
So for me, that color is useful specifically because it's relatively unique.
FWIW, in Italian the sky is commonly referred to as 'azzurro' (azure), except when it really is a deep blue, while a blueberry would definitely be 'blu' (blue). So, yes, in Italian we tend to distinguish the two colours more than, for example, English. Which I guess is the point you are making.
Sometimes maybe. Most of the time, who cares? Grey, clear or dark skies seem most important to distinguish. Do we mean the temperate daybreak blue, midday tropics blue, or depth of full moon night blue? Grass and other plants can have degrees of blue in there too. What about sea? Sometimes blue, sometimes green, most of the time somewhere in between. What probably matters most to a mariner is swell and temperature(?).
I suspect it only really started to matter after Perkin's mauve in the mid-19th century, and it matters far more now in a world of a trillion Pantone shades.
Primary colors are arbitrary; the only reason there are three is possibly that we have three types of cones, but really it's about covering most of the visible spectrum.
So yeah, color spaces are a thing.
Cone sensitivity doesn't line up with primary colors, and some are more sensitive than others.
Tetrachromacy exists in humans, so some people can see more colors.
Magenta and violet are very different. Magenta isn't a spectral color; it's two colors we interpret as one.
The sky is actually blue and purple. If you look hard at it you can see both, kinda like seeing RGB on a white LCD.
And I'm sure there's more.
It's more accurate to say magenta is a color that can't be produced from a monochromatic light source. It's a mix of ~450nm and ~650nm light. It's still a color, though; the term "color" is more akin to the final output of the brain, as opposed to the light entering the eyes.
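As a rough illustration (a deliberately naive sketch, not real colorimetry: proper mixing goes through CIE XYZ with gamma handling, and the RGB values I assign to the two wavelengths are approximations), additively combining a red and a blue light stimulus lands on magenta, a color no single wavelength produces:

```python
# Naive additive mixing of two light sources in 8-bit RGB.
# This only illustrates the point that red + blue light is
# perceived as magenta; it is not accurate colorimetry.

def mix_lights(c1, c2):
    """Additively mix two RGB lights, clipping each channel to 255."""
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

red = (255, 0, 0)   # rough stand-in for ~650 nm light
blue = (0, 0, 255)  # rough stand-in for ~450 nm light

print(mix_lights(red, blue))  # (255, 0, 255), i.e. magenta
```

There is no wavelength whose stimulus maps to (255, 0, 255); the percept only exists as the brain's response to the combined input.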
I thought about it for a moment. If these are the primary colors, why aren't they what printers use? Printers use cyan, magenta, and yellow. And there's a great symmetry there, since those are all the secondary colors to light's primary red, green, and blue, which are the colors used in computer monitors. And the only thing that makes these colors primary to us is that they're the colors that the cones in our eyes perceive.
So I decided to experiment. When I made my color wheel, I substituted blue for cyan and red for magenta. The color wheel I produced was much more vibrant and beautiful.
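The symmetry between the light primaries and the print primaries can be shown numerically: in a normalised RGB model, each subtractive primary is simply the complement of an additive one. A minimal sketch (the function name is mine, not from any graphics library):

```python
# In a normalised RGB model the subtractive (print) primaries are the
# complements of the additive (light) primaries: C = 1-R, M = 1-G, Y = 1-B.

def rgb_to_cmy(rgb):
    """Return the CMY complement of a normalised (0.0-1.0) RGB triple."""
    return tuple(round(1.0 - c, 3) for c in rgb)

assert rgb_to_cmy((1.0, 0.0, 0.0)) == (0.0, 1.0, 1.0)  # red   -> cyan
assert rgb_to_cmy((0.0, 1.0, 0.0)) == (1.0, 0.0, 1.0)  # green -> magenta
assert rgb_to_cmy((0.0, 0.0, 1.0)) == (1.0, 1.0, 0.0)  # blue  -> yellow
```

Which is why a monitor's primaries and a printer's primaries sit opposite each other on the same wheel: one set emits light, the other absorbs it.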
There are a lot of things my school teachers tried to teach me that show up on wikipedia's common misconceptions list. For example, the "equal-transit-time explanation of aerofoil lift". I got pretty jaded about this stuff. Now I don't trust anyone's explanations unless I can understand them on a deeper level.
You're right in that what we're taught is often incomplete or misguided. Teachers are fallible. But as a child you assume their authority implies them being correct. I reckon seeing through that illusion is an important part of growing up. And to me, part of us growing up as humanity must involve not having to rely on the authority of governing bodies.
I never believed my teachers. As early as 4th grade they were treating me like a troublemaker for not following rules like their three-paragraph essay format.
I don't like the idea of having authorities on knowledge. I much prefer Montessori or Socratic teaching methods, or explorations. They're harder to do, but they produce a better understanding of the material and they allow the student to teach the teacher as well.
The teachers aren't completely "wrong"; they were just conveying a simplification of the history of pigments (also touched on in that article). It is, after all, true that you can mix those colors and get a wide range of colors (including a blacker black than you would with CMY). But any pedagogy that says there is such a thing as "primary" colors that make all colors is necessarily going to be wrong, even if it's CMY.
There really is no such thing as "the" primary colors. The school primary colors are just as primary as those used by a printer, and are based on historically widely used pigments. CMY (usually plus K) allows for a wider gamut. But even this isn't perfect: there are printing processes with more primaries to get a wider gamut.
> And the only thing that makes these colors primary to us is that they're the colors that the cones in our eyes perceive.
This isn't quite right. For one, our cones are not monochromatic receptors, and moreover, they overlap! There isn't really just one true red, green, blue used in computer monitors either.
Because of the way our brain perceives colors (metamerism), you can create a wide gamut of colors with "alternative" primaries.
Color vision and stimulus is not a straightforward mapping of primaries triggering cones. If it were that simple you could trivially render all perceivable colors with 3 chosen primary colors. This is impossible to do.
You can mathematically define 3 primaries that cover the entire visible spectrum but they cannot physically exist (complete, but imaginary).
Any chosen set of 3 primaries is a compromise. For subtractive materials it is trickier, which is why photo inkjet printers will use up to 8 primaries.
You might find this informative:
https://web.archive.org/web/20080717034228/http://www.handpr.... The section starting with Maxwell and the "3 artist's misconceptions" especially.
But the only general definition of primary is basically just any set of colorants that can be mixed to get a useful gamut. In subtractive materials, this is why you won't see a painter messing around with mixing cyan, magenta, and yellow (better explained in the link).
Seems like flying an airplane upside down disproves that explanation pretty quickly.
So blue/green and green/yellow seem like more plausible categories than blue and green. I can see red, but green pretty much looks like bluish yellow.