My most radical speculation is an iPhone A-series chip with an additional low-power ARM core specifically to support Watch apps without burning too much of the "host" device's battery.
The one place I've seen little-endian actually be a help is that it tends to catch "forgot to malloc strlen PLUS ONE for the terminating NUL byte" bugs that go undetected for much longer on big-endian machines. Making such an error means the NUL gets written just past the end of the malloc'd block, which may be the first byte of the next word in the heap, which (on many implementations of malloc) holds the length of the next item in the heap, typically a non-huge integer. Thus, on big-endian machines, you're overwriting a zero (the high-order byte of a non-huge integer) with a zero, so no harm is done and the bug is masked. On little-endian machines, though, you're very likely clobbering malloc's idea of the size of the next item, and eventually it will notice that its internal data structures have been corrupted and complain.

I learned this lesson after we'd been shipping crash-free FrameMaker for years on the 68000 and SPARC, and then ported to the short-lived Sun386i.
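A minimal sketch of the bug class being described, in C (broken_strdup is an illustrative name, and heap layout details vary by allocator):

    #include <stdlib.h>
    #include <string.h>

    /* Off-by-one: no room is reserved for the terminating NUL. */
    char *broken_strdup(const char *s)
    {
        char *copy = malloc(strlen(s));   /* should be strlen(s) + 1 */
        strcpy(copy, s);                  /* the NUL lands one byte past the block */
        return copy;
    }

Whether that stray zero byte is harmless depends on which end of the next chunk's size field it lands on, which is exactly the asymmetry described above.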
Little endian is more natural for data structures. The least significant bit goes in the byte with the least address, and the most significant bit goes in the byte with the greatest address, so you never have to remember which way you're going, which is particularly nice when working with bitvectors and bit fields.
"Left" and "right" can go either way, depending on which kind of diagram you draw, even on big-endian machines, so those words always end up ambiguous. Stick to bit significance and address order and everything is unambiguous and naturally inclined to little endian.
I'm not sure what you mean by the bit-shifting case. The 8-character ASCII string compare is a neat trick with limited applicability these days.
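If the trick in question is the one I think it is: treat 8 bytes as a single big-endian integer, and one unsigned comparison reproduces memcmp's ordering. A hedged sketch (pack_be is an illustrative helper; the hardware win was getting this packing for free from a single big-endian load):

    #include <stdint.h>

    /* Pack 8 chars into a u64 in big-endian byte order, so that
       unsigned integer comparison matches memcmp over the 8 bytes. */
    static uint64_t pack_be(const char s[8])
    {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++)
            v = (v << 8) | (unsigned char)s[i];
        return v;
    }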
This article covers the practical tradeoffs of little- and big-endianness well: https://fgiesen.wordpress.com/2014/10/25/little-endian-vs-bi...
The tl;dr is that little-endian was a smart performance optimization in the early microprocessor days when nearly all arithmetic was effectively bignum arithmetic (because the ALUs were only 4 or 8 bits wide), but that doesn't really matter now, so we're stuck with little-endian despite big-endian having some small developer productivity benefits.
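To make the optimization concrete: addition has to start at the least significant digit so the carry can propagate upward, and little-endian puts that digit at the lowest address, so a narrow ALU can walk memory in simple increasing order. A rough sketch of that access pattern, assuming byte-wide limbs (bignum_add_le is an illustrative name, not from the article):

    #include <stddef.h>
    #include <stdint.h>

    /* Add two n-byte little-endian numbers; the carry chain walks
       upward through memory, lowest address first. */
    void bignum_add_le(uint8_t *dst, const uint8_t *a,
                       const uint8_t *b, size_t n)
    {
        unsigned carry = 0;
        for (size_t i = 0; i < n; i++) {
            unsigned sum = a[i] + b[i] + carry;
            dst[i] = (uint8_t)sum;
            carry = sum >> 8;
        }
    }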
The thing is, little-endian won pretty much everywhere outside of network protocols, so almost all of the common data formats store words in little-endian format as a performance optimization. By going big-endian, you'd both eat a byte-swapping performance hit on every load or store of these formats and break a tremendous amount of software that assumes it is running on a little-endian architecture. Dealing with those headaches would absolutely not be worth the trouble for the almost insignificant benefit of slightly easier-to-read hex dumps, or the slightly more useful benefit of string comparison via memcmp, which could be better performed by dedicated SIMD instructions anyway.
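To illustrate the per-access cost: the standard portable way to read a little-endian field compiles down to a plain load on a little-endian machine, but to a load plus a byte swap on a big-endian one (a sketch; most compilers recognize this pattern):

    #include <stdint.h>

    /* Read a 32-bit little-endian value from a byte buffer.
       Free on little-endian hardware; costs a swap on big-endian. */
    static inline uint32_t load_le32(const unsigned char *p)
    {
        return (uint32_t)p[0]
             | (uint32_t)p[1] << 8
             | (uint32_t)p[2] << 16
             | (uint32_t)p[3] << 24;
    }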
Actually, it is possible that that was nothing more than an accident. We use Arabic numerals, and Arabic is written right-to-left. Then there are languages like German where the tens and units are read in reverse order, so "42" is read as "two-and-forty".
The cardinal number systems for most major languages lead with larger terms (as in English). I don't think there's anything deep about this, it's probably an accident. And there are languages which lead with smaller terms, such as Malagasy (the national language of Madagascar).
The ordering of digits in Arabic is not obviously relevant, per se, since spoken English ("one hundred twenty one") matches the order of the Arabic numbers, too.
Any other dates will likely be spoken in the same order as written. For instance, the rhyme for Bonfire Night is 'remember, remember, the fifth of November'. I believe that many in the US also talk about the Fourth of July, rather than July fourth, so it's not like English has the hard-and-fast rule you were proposing.
Technically, I'd drop the "th of" and just say/write "11 September 2001".
It took me a while to figure this out because it's actually quite rare to speak a date including a year without reading it - most spontaneously spoken dates are this year (so the year is implied), and for dates I read out, I'd probably say whatever was written.
The clincher was how I'd say my birth date, which would be of the form above.
I'm not claiming to be the definitive British English speaker, though! ;)
...and as another poster commented, it might depend on context - for example, "September 11" is often used in British English because it refers to an American event.
I think we should all take a moment to admire the francophone Swiss for boldly dropping much of the madness that is French counting. (Yes, I am looking at you, quatre-vingt-dix-neuf: 'four-twenty-nineteen' for 99!)
I think it is relevant. It is possible that Western mathematics copied the Arabic notation (with right-to-left numbers), without also copying the correct way to read it (also right-to-left). For a similar situation in language, think of accents and the many different ways you can pronounce the same word.
"For a similar situation in language, think of accents and the many different ways you can pronounce the same word."
Could you be more specific as to what you mean?
For example, you can write out e as 2.7182...
However, if we were to flip this notation to ...2817.2, it isn't clear where to begin writing the number if we read (and write) from left to right. With the regular representation, you write out the 'major' parts of the number first and then give as many further digits as you want; you have the beginning of your string in mind. With a reversed system, you don't have the beginning but the end of the string in mind.
Anyway, it doesn't matter what you think is 'more natural.' Computing in binary probably feels less natural to you, but nobody is going to stop making binary computers because of that.
The German thing is just weird until you get used to it; with French I constantly go, "wait, crap, this is over 70, what's the deal again?" I blame the wine consumption.
There's your problem: you're living on Earth. Try living in the cloud. :) (network byte order)
4.x > V ∀ x from 0..∞