OT: I hope that is the case. I've been responsible for building query-like tools for end users, and all the current stuff out there completely sucks.
For example, no non-programmer I've ever talked to can correctly explain the difference between:
A and B or C
A and C or B
And to be fair, it's only because of arbitrary precedence rule choices that those are different at all.
I've personally found it works better to deal in groups: instead of "AND, OR" you have "ALL, ANY", and you always group rules (even if they're groups of one rule).
But even when you have that, you then have to deal with nesting rules, and nests of nests.
The actual implementation of the backend of such systems is easy, the composite pattern / delegates pretty much deals with the implementation.
But the front-end side? It tends to be forgotten and universally sucks, to the point that either it gets handed off to a developer or query-tool expert to use, or some horrific mistake happens, such as accidentally mail-shotting everyone, which makes them never try automatic query rules again.
Graphical query building for the end-user is a really difficult area which hasn't seen enough research.
A user thinks: I want to mail everyone who Visited Yesterday and is either a Man or Under 30. What they need is "Visited > Yesterday AND (Men OR Under 30)", but they forget the brackets and write "Visited > Yesterday AND Men OR Under 30". Whoops, that's half their client list hit.
I really enjoyed building data applications in MS Access before it was sunset. In fact, I finally started understanding SQL joins by using Access's graphical query builder so that it made me much better at my PHP web programming job. (That was in 2003-2005 time frame. I was also still in college.)
So the obvious fact that PP (along with other abortion "providers") routinely shreds tiny living human beings (dehumanized by labeling them "fetuses" or "fetal tissue") by the hundreds of thousands per year doesn't bother you in the least?
Unfortunately the word "free" is a little ambiguous, since it can be free (as in beer) but then suddenly you are charged because you did something stupid (or someone hacked your account and created a zillion services).
Which is why you enable Multi-Factor Authentication beforehand, and make sure the roles you assign to the instances you create don't have write access, except for the one bastion host. That bastion, in turn, shouldn't allow root login or be reachable from the public internet except from your IP, while still having access to your private servers via VPCs.
Memory savings isn't the important thing. Small strings are, after all, small. You can fit a lot of them in cache. What you're saving is the CPU cost of malloc()+free().
I'll trust that they had good performance data that made them decide to do this. Still, I'm always a bit skeptical about these clever string tricks. It's an optimization that looks amazing in microbenchmarks but has costs in the real world. That's because you're adding a (potentially hard-to-predict) branch to every access to the string.
If your application constantly allocates and deallocates lots of tiny ASCII strings, this clever representation will be fantastic for you. However, if you use a mix of string sizes, tend to keep them around for a while, and do a lot of manipulations on those strings, you pay the branch-misprediction cost over and over.
Apple's implementation actually adds a (potentially hard-to-predict) branch to every single Objective-C message send.
They're extremely careful about the performance of objc_msgSend, because it's so significant to the overall performance of apps. The running time for a hot path message send is down to single digit CPU cycles. Any addition to that shows up pretty loudly. I'm sure they checked to make sure the check doesn't add too much overhead in real-world use, and that the wins are worth it.
It's not really CPU intensive. To turn a 5-bit char into an 8-bit one, just do a lookup into a tiny constant array. Doesn't even introduce any new branches. Roughly the same performance as iterating over every character. (which you're probably doing anyway if you need the conversion)
The savings aren't just memory, it's probably a performance improvement too. Fewer allocations, fewer pointer indirections, and some operations (like equality checking) become O(1) instead of O(n) for these tiny strings.
The other way round (deciding whether a newly minted string of length 10 can be put in a tagged pointer) is slightly more complex. Also, for tiny strings (the only strings that this supports) on modern hardware, I'm not sure that O(n) takes much longer than that O(1).
I would think the avoidance of allocations and the pointer indirections are the big wins.
The tag bits are zero when using the pointer as a pointer, so nothing special needs to be done in order to dereference them. The tag bits are only non-zero when non-pointer data is packed into the pointer, and in that case you inherently need special handling anyway.
It does mean that every dereference needs to check for taggedness, though. For example, newer versions of objc_msgSend check the low bit at the beginning of the call before it proceeds to the normal message send path (or not, if it actually is a tagged pointer).
Do you mean because it already did the check before, and it could remember the results? Certainly it could. Unfortunately this isn't an option in Objective-C because the compiler can't make any assumptions about the tagged pointer implementation, and that means objc_msgSend has to start fresh at each invocation. In theory you could have sub-functions, like objc_msgSend_really_object and objc_msgSend_tagged_pointer that the compiler could generate calls to directly, but then you'd have to support whatever assumptions the compiler made forever.
I'm guessing/hoping that with modern branch prediction, the check at the top of objc_msgSend is a tiny penalty. I'm sure Apple measured the heck out of it, anyway.
I'm old enough to remember the days (daze?) of punched cards, when I was in high school and working at the Naval Electronic Labs on Point Loma (San Diego).
My worst experience was writing a whole compiler in IBM 360 assembler macros, and submitting it to the punch card department (probably a couple thousand cards), only to be chagrined when they wanted to know who was going to pay for this rather large job after the fact. (They kindly ate it.)
I have no idea how I got a systems programming job there for the summer, but somehow I finagled it. We were using a highly advanced new subsystem which had just appeared, TSO (time sharing option) for OS/360.
It's only an illusion when the central bank is unwilling to inject money into the system in the event of a bank run. The central bank has to maintain somewhat of an equivalence between money stored in banks and cash. It depends on whether banks become merely "money vaults" or investment/speculative vehicles.
Yes, there's a whole portion of the Amazon Cloud that's run entirely for government (a family member is a higher-up at AWS Gov), and I have to assume they're also running private clouds with physical security, but I have no idea.