There are approximately 1.4 times as many iPhone tweets as Android tweets (about 1.6 million yesterday vs. about 1.1 million). It would be nice if the tweet metadata included more information about which Android model a tweet came from, but it doesn't, so all we have is the big trend.
I made the map tiles, but Tom did the UI and compositing, so I'm not sure how the layers are combined. There are more iPhone tweets than any other source, so they will tend to be the most visible if they are given equal priority.
No, it supports the article's conclusion. iPhones produce far more tweets, yet are concentrated in high-income areas. Androids, with far fewer tweets, are nevertheless spread more widely and evenly across all areas - and dominant outside of wealthy areas.
The point of the map isn't to show income disparity; it's to show which smartphone brand tweeters use. The fact that it incidentally does show income disparity is what's interesting.
You should double-check the compositing. It looks like the iPhone layer is being drawn directly on top of the Android layer, so there is a good chance that if you reverse the compositing order, the map will light up green.
The correct approach would be to decide a winner, or do some blending, for every pixel on the map.
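For example, a minimal sketch of a per-pixel merge, assuming the tiles are plain same-sized RGBA buffers (the names and layout here are illustrative, not the actual tile code):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct { uint8_t r, g, b, a; } Pixel;

    /* Merge two same-sized RGBA tiles pixel by pixel instead of
     * painting one layer wholesale over the other. Whichever layer
     * has more signal at a pixel (alpha, here) wins that pixel. */
    void merge_tiles(const Pixel *iphone, const Pixel *android,
                     Pixel *out, size_t n_pixels)
    {
        for (size_t i = 0; i < n_pixels; i++) {
            /* Winner-take-all: keep the denser layer's pixel. */
            out[i] = (iphone[i].a >= android[i].a) ? iphone[i] : android[i];

            /* A blending variant would combine channels instead, e.g.
             * out[i].r = (uint8_t)((iphone[i].r + android[i].r) / 2);
             * so neither layer can completely hide the other. */
        }
    }

Either way, the decision happens per pixel, so draw order stops mattering.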
The ` and ' characters were in fact mirrored in the original ISO 646, where they were supposed to be used to overstrike accents onto other letters. The straight apostrophe comes from the ISO 8859-1 era.
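A quick illustration of the overstrike idea (this is hardcopy-terminal behavior; a modern screen terminal just moves the cursor back and overwrites):

    #include <stdio.h>

    int main(void)
    {
        /* Letter, backspace, accent: on a hardcopy terminal both
         * characters strike the same position, printing roughly "è". */
        fputs("e\b`\n", stdout);
        return 0;
    }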
There is no degree symbol because nobody ever proposed one during the standards process. Most of the punctuation came from what appeared on US typewriters at the time. Likewise the pilcrow (¶), the section sign (§), etc.
The first 32 characters are controls because one of the major proponents of the code was Teletype, a division of AT&T. Nobody understood what network protocols were going to turn out to look like, and existing protocols were very heavy on in-band signaling. It was an attempt to eradicate the worst features of the Baudot code previously in use, where the code had multiple shift modes and every character could take on multiple protocol interpretations depending on the shift state.
The newline vs. carriage return thing is also an artifact of AT&T's involvement. Most US computing organizations didn't care about controls at all and wanted fixed-size records; European computing organizations wanted a single newline. The compromise in ASCII-1968 was that LF could be interpreted as CRLF if sender and receiver agreed, which became the Multics convention and from there passed into Unix and C.
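That single-LF convention is still visible in C's text streams, which is where the compromise ended up. A trivial example (the filename is just for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* C programs write a lone '\n'; the text-mode stream maps it
         * to the platform convention (LF on Unix, CRLF on Windows). */
        FILE *f = fopen("demo.txt", "w");  /* "w" opens in text mode */
        if (f == NULL)
            return 1;
        fputs("one line\n", f);
        fclose(f);
        return 0;
    }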
ASCII is 7-bit because computers at the time universally used 6-bit characters, and nobody thought the computer people would actually use the control characters - only the middle 64 characters of the code. (No lower case either.) IBM threw a wrench in the works when it went to 8-bit bytes with the System/360, and others followed.
ANSI C didn't drop old-style (non-prototype) function declarations and definitions -- nor did C99 or C11. They've been officially obsolescent since 1989, but they're still fully supported by any conforming compiler.
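For anyone who hasn't run into the old style, here's what both look like side by side (the function names are just for illustration); any conforming C89 through C11 compiler accepts both:

    #include <stdio.h>

    /* Old-style (K&R) definition: parameter types are declared
     * separately, between the parameter list and the body. */
    double scale(x, factor)
    double x;
    int factor;
    {
        return x * factor;
    }

    /* Prototype-style definition, standardized by ANSI C in 1989. */
    double scale2(double x, int factor)
    {
        return x * factor;
    }

    int main(void)
    {
        printf("%g %g\n", scale(2.0, 3), scale2(2.0, 3));
        return 0;
    }

Compilers will typically warn about the old style (e.g. gcc's -Wold-style-definition), but they still compile it.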