It is an interesting attack, but is the above goal ever achievable: protecting against adversaries from the inside?
People have gotten very close to achieving similar goals.
For example, modern games consoles' anti-piracy measures guard against the device owner who has physical control and unlimited time. 
iPhone activation locks likewise prevent stolen phones from being used, even by thieves with physical control and unlimited time.
And neither of these systems relies on the clunky 'brick the device if the case is opened' methods of yesteryear.
(Of course, there have also been a great many failed attempts: almost every console since the dawn of time has eventually been hacked, as have things like TPMs and TrustZone; many versions of the iPhone were rooted; and so on.)
By contrast, someone with a hardware fault-injection attack on Google's cloud infrastructure has only a tiny number of potential customers (spy agencies or rogue admins), the servers are all locked up in data centres, and anyone caught carrying out an attack would get fired and/or arrested.
On the other hand, there for sure is a market for cloud based attacks, and nation states that can apply a stick to go along with the carrot of millions of dollars in "consulting fees".
Doubly true when you consider the history of Google working with the USG.
Yes. To expand: to a function on the CPU, an administrator is just another user. The operating system is responsible for managing those designations.
These trusted computing pieces across all kinds of CPUs are specifically aimed at protecting against people with host-root, so it would seem like it's a goal they've set for themselves and should be reasonably achievable.
It's not important, but come on: if your field is cybersecurity, at least make sure 'rogue' is spelled correctly.
Just trying to figure out where you've drawn the line.
Achievable in any circumstances? No. Within a well-defined threat model, definitely.
No, safe execution of untrusted code is impossible by the very definition, not without undoing 40 years of IC design practices.
It's close to a physical limitation: it is very hard to compute something without some electromagnetic leakage from/to the die.
Take a look at secure CPUs for credit cards. They have layers upon layers of anti-tampering and anti-extraction measures, and yet TEM shops in China do firmware/secret extraction from them for $10k-$20k.
> No, safe execution of untrusted code is impossible by the very definition
I think this is more about data processing while hiding the data from whoever operates the hardware. Homomorphic encryption could be a partial answer to that.
Please explain to me how homomorphic encryption will protect someone from basic laws of physics.
The idea is to use a special encryption scheme (and associated operations). If I take 50 numbers and multiply them by two before asking you to add them, I just have to divide the result by two to get the correct answer, and you will see neither the data nor the result. Of course, actual schemes are more complex than that.
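To make that concrete, here is a minimal sketch of an additively homomorphic scheme (textbook Paillier with tiny toy parameters; this is my illustration rather than anything from the thread, and it is in no way secure). The point is that the operator can add encrypted values without ever seeing the plaintexts:

```python
# Toy Paillier sketch: additively homomorphic encryption.
# Toy primes, no padding, no side-channel hardening -- illustration only.
import math
import random

def keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                          # standard simple choice of generator
    n2 = n * n
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)         # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen(293, 433)           # toy primes
c1, c2 = encrypt(pub, 15), encrypt(pub, 27)
c_sum = (c1 * c2) % (pub[0] ** 2)      # multiplying ciphertexts adds plaintexts
assert decrypt(pub, priv, c_sum) == 42
```

The multiply-ciphertexts-to-add-plaintexts step is the homomorphic property; fully homomorphic schemes extend this idea to arbitrary computation, at a much higher cost.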
Also, just because something is physically possible, doesn't mean that the barriers to doing so are irrelevant. If it costs you $10k to unbrick a locked & stolen iPhone, then those countermeasures have likely succeeded at their intended purpose. This is why threat models try to quantify the time and/or monetary value of what they're protecting.
A single TEM facility comes with a $1,000,000+ price tag, and there are usually only a few dozen per developed country, in use in places like universities and research institutes.
China has probably more of them than the rest of the world combined.
That the CPU should be able to cryptographically prove that a VM has been set up without any interference from an inside attacker who controls the hardware.
At the very least, SEV massively raises the barrier to such attacks. It's now beyond the ability of a rogue administrator or technician, requiring complex custom motherboards. But a well-funded inside attacker can target something with high enough value.
The end of the abstract explicitly refutes this. It is claiming that a software-only solution, using keys derived with this technique, can pretend to be a suitable target to migrate a secure VM to, which then allows the rogue admin to inspect or modify anything in the VM.
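For intuition, here is a conceptual sketch of why key extraction breaks that guarantee (this is not the real SEV migration protocol; HMAC stands in for the actual certificate-chain signature, and all names are made up for illustration). The sender only checks a proof made with the chip's endorsement key, so anyone holding an extracted key can pose as a valid target:

```python
# Conceptual sketch: attestation only proves possession of the chip key.
import hmac, hashlib, os

CHIP_ENDORSEMENT_KEY = os.urandom(32)   # secret normally fused into a genuine CPU

def attest(platform_info: bytes, key: bytes) -> bytes:
    # A target "signs" its platform info with the chip endorsement key.
    return hmac.new(key, platform_info, hashlib.sha256).digest()

def sender_accepts(platform_info: bytes, report: bytes) -> bool:
    # The migrating side verifies the report against the chip key
    # (in reality, via a certificate chain rooted at the vendor).
    expected = attest(platform_info, CHIP_ENDORSEMENT_KEY)
    return hmac.compare_digest(expected, report)

# Genuine hardware produces an accepted report:
assert sender_accepts(b"real target", attest(b"real target", CHIP_ENDORSEMENT_KEY))

# An attacker who extracted the key via fault injection can run a
# software-only "target" whose report is indistinguishable, then receive
# the migrated VM and read it in the clear:
stolen = CHIP_ENDORSEMENT_KEY
assert sender_accepts(b"fake sw target", attest(b"fake sw target", stolen))
```

The verifier has no way to tell genuine silicon from an emulator once the key leaves the chip, which is exactly the software-only migration attack the abstract describes.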
Put each CPU in, extract the keys, then deploy it in a regular motherboard.
At a minimum, it takes shutting down and powering off the physical machine, then starting it up again, which would not go unnoticed in the highly controlled environments where SEV makes the most sense.
If it's an insider attack on company-owned and -operated hardware, there's always some reason to have a long downtime, and you can piggyback on that to attack the CPUs... or just put it in a new system and use the migration setup.
Suggested downtimes, organic or sabotaged, depending on the attacker's timeline:
HVAC failure: you have to shut down many/most/all servers to manage temperatures until the HVAC techs can fix it.
Automatic transfer switch failure: these things love to fail at the same time as a utility failure, and aren't always easy to bypass.
This is about protecting a VM from people who have admin rights and hardware access outside the VM.
Honestly that's kind of what I would have expected. Just making it almost impossible to get VM memory remotely by owning the hypervisor is pretty good and reduces your attack surface to people who can get into the data center and have electronics expertise.
Fundamentally, though, system security hasn't caught up with the promise of SEV. It's far more likely that a VM will be compromised by 0-day attacks than insiders at the cloud companies. But if you really need to run a secure kernel on someone else's machine then SEV is the way of the future. This includes using SEV on-premises against hardware attacks. I've wanted hardware RAM encryption for a decade or two to avoid coldboot attacks and similar hardware vulnerabilities.
On Android it's already a choice between banking apps or a device you fully control. I fear that this will include all internet-connected devices in the future.
The true ethics violation here is creating devices to be "sold" while retaining control over their new supposed owner. Unfortunately, the digital/software engineer's main recourse against ethical violations is to quit, and someone else will just take their place. As the digital honeymoon wears off and we become keenly aware of communications technology's authoritarian potential, I hope there is a different type of resistance forming within all of these systems of control.
Someone might be able to develop a method of causing this to occur by targeting a draw elsewhere, but this will likely be motherboard-specific (or even entire-platform-specific).
It definitely means that SEV isn't going to save you if your vendors conspire against you, but unless you're dealing with a determined state-level actor, I doubt there is much risk to most of us. Internal actors (rogue staff) are likely to compromise things in a simpler way.
However, this is great research. I imagine that in CPU designs being planned now there will be some work to ensure the Secure Processor is protected, likely by making the processor fault when the SP input voltage drops. Alternatively, they might be able to move some power regulation onto the processor package, providing buffering against voltage manipulation.