Possibly because it illustrates that using assembly isn't necessarily the insane, scary idea it first seems to many (like me), even to those who should know better because they have used it in the past (also like me).
No, a literal translation is what you get when you write an HTTP server in C and inspect the assembly it produces for x86-64. Since this assembly code is nowhere close to that output, it is not a literal translation.
No, that simply means that they are semantically equivalent, which is very different from a literal translation. To quote an online dictionary: "2. Word for word; verbatim: a literal translation." Optimisation in compilers is certainly not word for word.
Take a simple problem like FizzBuzz and write it in the simple, obvious branching style. Now compile it with GCC or Clang (with -O3) and you end up with a lookup table (or at least I did a few months back). Semantically equivalent, but not a literal "word for word" translation.
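For reference, here's roughly the branching version I mean (a sketch from memory; the exact code I compiled back then may have differed slightly):

    #include <stdio.h>

    int main(void)
    {
        for (int i = 1; i <= 100; i++) {
            if (i % 15 == 0)
                puts("FizzBuzz");      /* divisible by both 3 and 5 */
            else if (i % 3 == 0)
                puts("Fizz");
            else if (i % 5 == 0)
                puts("Buzz");
            else
                printf("%d\n", i);
        }
        return 0;
    }

At -O3 the x86-64 output is nothing like a statement-by-statement rendering of this loop; the compiler is free to restructure it however it likes as long as the observable behaviour matches.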
You are missing that there exists more than one literal translation of a particular C program into asm. By your logic, a compiler could not produce a literal translation of any program unless its output 100% matched that of every other compiler for the same program.
For example, SECDED (single error-correction, double error-detection) can correct only a single-bit error within a 64-bit word. If a word contains two victims, however, SECDED cannot correct the resulting double-bit error. And for three or more victims, SECDED cannot even detect the multi-bit error, leading to silent data corruption.
Technically, SECDED cannot reliably detect errors involving more than 3 bits, since they might produce a valid codeword. They might not, however, in which case they could be misread as a single- or double-bit error, or possibly something else.
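To make that concrete, here's a minimal toy sketch of a SEC-DED code: a Hamming(7,4) code with an extra overall parity bit. (Real DRAM ECC uses a (72,64) code, but the decoding logic and failure modes are the same; the encode/decode names and bit layout here are just my own illustration.) One flip is corrected, two flips are detected but not corrected, and three flips can be silently miscorrected:

    #include <stdio.h>
    #include <stdint.h>

    /* Encode 4 data bits into an 8-bit SEC-DED codeword.
     * Layout (LSB = position 0): position 0 is the overall parity bit,
     * positions 1, 2, 4 are Hamming parity bits, positions 3, 5, 6, 7
     * are the data bits d1..d4. */
    static uint8_t encode(uint8_t data)
    {
        uint8_t d1 = data & 1, d2 = (data >> 1) & 1;
        uint8_t d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
        uint8_t p1 = d1 ^ d2 ^ d4;                /* covers positions 1,3,5,7 */
        uint8_t p2 = d1 ^ d3 ^ d4;                /* covers positions 2,3,6,7 */
        uint8_t p3 = d2 ^ d3 ^ d4;                /* covers positions 4,5,6,7 */
        uint8_t cw = (uint8_t)((p1 << 1) | (p2 << 2) | (d1 << 3) |
                               (p3 << 4) | (d2 << 5) | (d3 << 6) | (d4 << 7));
        uint8_t p0 = 0;                           /* overall parity of positions 1..7 */
        for (int i = 1; i < 8; i++)
            p0 ^= (cw >> i) & 1;
        return cw | p0;
    }

    /* Decode: returns 0 = clean, 1 = single-bit error "corrected",
     * 2 = double-bit error detected but uncorrectable. */
    static int decode(uint8_t cw, uint8_t *data)
    {
        uint8_t s = 0, par = 0;
        int status;
        for (int i = 1; i < 8; i++)
            if ((cw >> i) & 1)
                s ^= (uint8_t)i;                  /* Hamming syndrome */
        for (int i = 0; i < 8; i++)
            par ^= (cw >> i) & 1;                 /* overall parity check */
        if (s == 0 && par == 0)
            status = 0;                           /* no error */
        else if (par == 1) {                      /* odd number of flips */
            if (s)
                cw ^= (uint8_t)(1u << s);         /* flip the bit the syndrome points at */
            status = 1;
        } else
            status = 2;                           /* even number of flips: detect only */
        *data = (uint8_t)(((cw >> 3) & 1) | (((cw >> 5) & 1) << 1) |
                          (((cw >> 6) & 1) << 2) | (((cw >> 7) & 1) << 3));
        return status;
    }

    int main(void)
    {
        uint8_t cw = encode(0x9), data;
        int st;
        st = decode(cw ^ 0x08, &data);            /* 1 flip: corrected, data restored */
        printf("1 flip : status=%d data=0x%x\n", st, data);
        st = decode(cw ^ 0x28, &data);            /* 2 flips: flagged uncorrectable */
        printf("2 flips: status=%d data=0x%x\n", st, data);
        st = decode(cw ^ 0xA8, &data);            /* 3 flips: miscorrected, wrong data,
                                                     reported as a clean single-bit fix */
        printf("3 flips: status=%d data=0x%x\n", st, data);
        return 0;
    }

The third case is exactly the silent-corruption scenario: the decoder reports a successful single-bit correction while handing back the wrong data.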
Yeah, ECC is going to make exploiting this reliably a lot harder - you'd need to flip three or more bits in the right combination, without first hitting a combination of bits that'd be detected as an uncorrectable error. Google's report suggests they haven't even been able to cause uncorrectable two-bit errors yet, let alone undetectable three-bit ones.
Most don't. A surprising number do: any page that embeds a Facebook 'like' button loaded from Facebook servers with a referrer header ... or jQuery hosted by Google, or a Doubleclick advert, or a retweet button, and on and on.
This is why I stay logged out of Facebook and don't allow third-party cookies. I'm also trying to move off Gmail so I am not always logged in there either. Probably not perfect, but hopefully it foils a lot of tracking attempts.
I block all scripts globally; Flash-heavy sites do not load, and I consider that a 'feature'. Frankly, surfing the web without NoScript & AdBlock is an experience I do not want and have learned to do without. If I need drivers, access, or e-commerce, I'll allow the main site to run temporarily in Sandboxie, or use alternative tools and approaches to get the data without 7 or 8 ad servers running. The false promise of the internet is easily eschewed when you take a pragmatic approach to it.
Also: I was looking for something on mainstream retailer sites and was amazed at the list of servers. How many TargetImg servers does it take to load a product page? Answer: at least four, plus all the other tracking/ad-loading detritus. Just wow.
That's not a tradeoff I'm comfortable with, "even" in pre-release software.
I'm not accusing you of this, but in other cases I've seen proponents of something say "This isn't a big deal, it's just starting out".
Then, when people continue to complain a year from now, the defenders say "This isn't a big deal, it's always done this."
Vivaldi looks interesting, but I'd wager there's a decent overlap between the power users they're after and the group that values their privacy the most.
Privacy has nothing to do with it. If the software is not yet ready, users are expected to encounter serious bugs. Your QA department is paid to help provide you with information to diagnose and troubleshoot those bugs, but users aren't, so you need to collect that automatically.
You're in the wrong world then, buddy.