Saw this recently after reviewing some Defcon 23 videos. The author goes into detail about how it works and some other fun stuff regarding anti-reverse-engineering.
Since we're joking: given that the MOV instruction exists on many CPUs, could the input to this compiler be considered the much-needed "portable assembly language"?
Not sure I understand how that is possible. How would you implement a boolean "and" with only "mov"? If you can only move stuff around, how do you read and compare things?
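The core trick, as I understand it (the movfuscator builds on the "mov is Turing-complete" paper), is that mov with indexed addressing gives you memory lookups, so a boolean op becomes a read from a truth table and a comparison becomes two stores that may or may not collide. A rough C sketch of the idea, where each statement corresponds to one or two x86 movs (the names and layout are just mine for illustration):

    #include <stdio.h>

    /* AND as a table lookup: and_table[a][b] == (a & b) for a, b in {0, 1}.
       Reading it is a single mov with base+index addressing,
       e.g. mov al, [and_table + eax*2 + ebx]. */
    static const unsigned char and_table[2][2] = {
        {0, 0},
        {0, 1},
    };

    /* Equality without a compare: store different values through both
       indices; if the indices are equal, the second store overwrites
       the first, and reading back through the first index reveals it. */
    static unsigned char scratch[2];

    static int mov_only_eq(unsigned a, unsigned b) {  /* a, b in {0, 1} */
        scratch[a] = 0;        /* mov byte [scratch + eax], 0 */
        scratch[b] = 1;        /* mov byte [scratch + ebx], 1 */
        return scratch[a];     /* mov al, [scratch + eax] -> 1 iff a == b */
    }

    int main(void) {
        printf("1 AND 1 = %d\n", and_table[1][1]);    /* 1 */
        printf("1 AND 0 = %d\n", and_table[1][0]);    /* 0 */
        printf("eq(1, 1) = %d\n", mov_only_eq(1, 1)); /* 1 */
        printf("eq(0, 1) = %d\n", mov_only_eq(0, 1)); /* 0 */
        return 0;
    }

Control flow is the hairy part: IIRC the compiler turns every branch into selects like this, executes the whole program on every pass (redirecting stores to a dummy location while a "not taken" path runs), and loops with a single jmp at the bottom, or optionally a faulting mov plus a signal handler to stay 100% mov.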
This wouldn't do any noticeable damage. Modern CPUs have excellent thermal management. As far as wear goes, a hot spot in the chip would in theory slightly decrease the CPU's lifespan.
If you expand your question to the rest of the hardware in the computer, then yes, you can easily cause damage. BIOSes can be flashed to make the system unbootable, or to overclock/stress components. Back in the bad old days of Linux, you could easily damage your monitor with the wrong xorg.conf settings.
Your question got me thinking: what's the MTBF of modern CPUs? My google-fu failed to turn up any reliable source on this, but I'm sure it's long, 10+ years.
> Back in the bad old days of Linux, you could easily damage your monitor with the wrong xorg.conf settings
You could also damage a floppy drive by making it repeatedly read/write a few sectors beyond the normal limits. Been there, done that.
But after so many discussions on online forums insisting that it was impossible to cause physical damage using software (other than by overwriting firmware), I gave up and kept this (and the asm code) deep inside my heart.
And bringing it up still gives me chills that those discussions will start up again right here...
> Your question got me thinking: what's the MTBF of modern CPUs? My google-fu failed to turn up any reliable source on this, but I'm sure it's long, 10+ years.
Probably decreasing, and soon not much longer than the warranty period... the transistors have gotten so small that they're on the threshold of barely working even in normal operation.
As for older CPUs, they could definitely last many decades because of the lower stresses of larger process sizes, and they were designed with much higher margins.
According to the paper linked in another comment (https://news.ycombinator.com/item?id=12373015), the high-k dielectric nodes used at 45nm and below apparently show ~5x worse NBTI ageing than non-high-k 45nm PMOS gates, which drives the tolerances selected to provide X years of life.
> Back in the bad old days of Linux, you could easily damage your monitor with the wrong xorg.conf settings.
Back when a certain kind of line printer was commonplace (it had a circulating ribbon with the typeface repeated, and n hammers in a line spanning the entire width), programmers could sabotage the printer by printing the pattern that was on the ribbon. This would cause all of the hammers to fire at once, which the machine wasn't designed to withstand.
I've also heard of monitors being broken by having the speaker output the resonant frequency of the glass cover. However, I can't vouch for this one.
Wow, thanks, that's a fascinating paper! Direct link to pdf: [1].
So it turns out that if a transistor is kept on continuously, its threshold voltage gradually increases (Negative-Bias Temperature Instability, NBTI), which increases the switching delay. The attack targets transistors along the critical path, increasing the path's delay until it exceeds the allowed tolerance (guardband). Turning the transistor off "heals" it; as a workaround they suggest periodically executing certain NOP instructions to ensure critical-path transistors spend at least 0.05% of their time turned off. They perform simulations using models of 45nm high-k PMOS transistors to produce their results. A good quote about processor reliability:
> Guardbanding is the current industrial practice to cope with transistor aging and voltage droops [Agarwal et al. 2007]. It entails slowing down the clock frequency (i.e., adding timing margin during design) based on the worst degradation the transistors might experience during their lifetime. The guardbands ensure that enough current passes through the processor to keep it above the threshold voltage and in turn ensure that the processor functionality is intact for an average period of 5 to 7 years [Tiwari and Torrellas 2008]. However, inserting wide guardbands degrades performance and increases energy consumption. Hence, processor design companies usually have small guardbands, typically 10% [Agarwal et al. 2007]. However, the MAGIC-based attack can deteriorate the critical path by 11% and cause erroneous results in 1 month.
This also explains why overclocking a CPU may be a bad idea, although they also show that random instructions don't come close to the worst-case ageing.
Well, I think this can be answered by considering that even under normal conditions there are transistors that are used as heavily as in your hypothetical scenario. For example, the instruction decoding logic is exercised for every instruction. Since all logic transistors are the same (AFAIK), I don't think that using one type of instruction would significantly reduce the lifetime of your CPU.
Someone made something very clever, but it has no practical usefulness whatsoever. It is an interesting intellectual exercise. It's also very impressive that they could pull this off.
I would think it has some practical use as research in OISC (one-instruction set computing) processors, which are like the RISC model (do a smaller set of instructions so you can do them faster) on steroids.
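For anyone curious, the textbook OISC is subleq ("subtract and branch if the result is <= 0"): one instruction, three operands, and it's Turing-complete. An interpreter for it is tiny, which hints at how cheap each processing element could be in hardware. A toy sketch (the memory layout and the ADD idiom are my own illustration, not from any particular paper):

    #include <stdio.h>

    /* Toy subleq machine. Each instruction is three words a, b, c:
       mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall
       through to the next instruction. A negative pc halts. */
    static void run_subleq(int mem[], int pc) {
        while (pc >= 0) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
    }

    int main(void) {
        /* Three instructions computing B = B + A, the standard ADD
           idiom (Z is a zero-initialized scratch cell).
           Cells 0..8 are code; A = cell 9, Z = cell 10, B = cell 11. */
        int mem[] = {
             9, 10,  3,   /* Z -= A        (Z = -A)         */
            10, 11,  6,   /* B -= Z        (B = B + A)      */
            10, 10, -1,   /* Z -= Z, halt  (result 0 <= 0)  */
            20,  0, 22,   /* A = 20, Z = 0, B = 22          */
        };
        run_subleq(mem, 0);
        printf("20 + 22 = %d\n", mem[11]);  /* prints 42 */
        return 0;
    }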
No practical usefulness? It seems rather great for obfuscation, be it for evil (viruses) or good (license key verification -- deemed 'good' merely because it's legal, not because it's not a pain in the arse).
Not really. Creating an algorithm for recovering the jumps and the intent of the various MOV patterns would be no more work than writing this was. Having access to the obfuscator's code makes it particularly easy, but I don't think it would be a major hurdle even without the source.
The same could be said for most binaries: they're just compilations (usually with open-source or freely available compilers) of C/C++ code. Shouldn't be too hard to reverse once you've got all the patterns worked out.
I see your point, though. I'm not very experienced in this, and I'm sure some patterns can easily be recovered. But until someone goes through the effort, reading the obfuscated program remains considerably harder than reading it normally, and even once someone does, it's questionable whether the original can be recovered with a simple 1:1 translation.
One-instruction set computers (OISC) are more than a joke, I suppose, but I haven't dug far into theoretical computer science and can't say what's important about them.
I read a comment the other day that stipulated neurons would be akin to massively parallel single-instruction computers.
> One-instruction set computers (OISC) are more than a joke, I suppose, but I haven't dug far into theoretical computer science and can't say what's important about them.
They're for highly parallel, programmable SIMD number crunching. An OISC would let you easily fabricate a whole heaping bunch of ALUs.