Intel x86 considered harmful – survey of attacks against x86 over last 10 years (invisiblethings.org)
276 points by chei0aiV on Oct 27, 2015 | 169 comments

"System management mode" is a tremendous wart and should be removed wholesale, with Intel adopting a more ARM-style trusted boot chain with explicit cooperation from the OS or hypervisor. And while you're at it, kill UEFI and install a pony for me.

(Seriously, SMM serves either bizarre ILO features that high-end vendors like but are rarely used, or security agencies looking for a layer to hide in.)

SMM is used all the time:

Several Intel chipset generations require certain register writes on shutdown (disable busmaster) or they won't _actually_ shut down. Operating systems aren't aware of that. (https://github.com/coreboot/coreboot/blob/master/src/southbr...)

UEFI Secure Boot requires "authenticated variables", which can be updated by the OS (after checking authentication, using a signature scheme). UEFI code resides somewhere in memory, so the OS (or ring0 code) could opt to bypass the verification and simply rewrite those variables. The recommended (but not required) solution is to move variable update to SMM. (https://firmware.intel.com/sites/default/files/resources/A_T...)
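The check being described can be sketched in a few lines (a toy model: real UEFI authenticated variables use time-stamped PKCS#7/X.509 signatures rather than an HMAC, and the function name here is illustrative):

```python
import hmac, hashlib

def set_authenticated_variable(store, name, payload, mac, key):
    """Toy model of an authenticated variable update: refuse the write
    unless the caller proves knowledge of the signing key."""
    expected = hmac.new(key, name.encode() + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):
        raise PermissionError("authentication failed")
    store[name] = payload

# The point of the comment above: if this check runs in ring 0, ring-0
# code can skip it and write store[name] directly. Moving the check (and
# the flash write) into SMM is what makes it non-bypassable.
```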

Several hardware features are actually implemented in SMM. I've seen SMM-based handling of certain special keys (e.g. a "disable Wifi" button) where ACPI grabs the event, then traps into SMM using a magic IO port.

Yeah, I was going to say, I've seen hardware where advertised features were implemented with SMM. You could possibly take it away, but it sure does enable a lot of nice hardware fixes without re-spinning silicon.

In some implementations it'll really screw up any RT plans you might have...

Couldn't those register writes required on shutdown be included in ACPI?

Since it's traversing the PCI bus hierarchy, not easily.

There are also a number of shortcuts along the lines of "shutdown is just two writes to a given register" (described by fields in the FADT) that some OSes probably expect to be around these days, and I'm not sure how a complex ACPI shutdown routine would fare in practice.
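For context, the shortcut in question is ACPI fixed hardware: the FADT points the OS at PM1a_CNT, and an S5 shutdown is (roughly) one write of SLP_TYP | SLP_EN to it. A minimal sketch of composing that write, using the bit positions from the ACPI spec (the function name is mine, and the actual SLP_TYP value comes from the DSDT's _S5 package):

```python
SLP_EN = 1 << 13  # PM1a_CNT bit 13: commit the sleep-state transition

def pm1_cnt_sleep_value(slp_typ):
    """Compose the PM1a_CNT register value requesting a sleep state.
    SLP_TYP occupies bits 10-12; writing the result with SLP_EN set
    is the 'one write' shutdown that simple OSes rely on."""
    return ((slp_typ & 0x7) << 10) | SLP_EN
```

A platform that additionally needs bus-master disabling on the way down can't express that through this one register, hence the SMM hook.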

> and simply rewrite those variables

Good luck trying that once that memory is encrypted with SGX.

SGX is too complex for such purposes, and it also doesn't provide access levels to hardware.

The alternative to hooking into UEFI code would be to just write to flash yourself. SMM has additional permissions there.

Actually ILO is pretty useful :-)

I have an APM (ARM64) Mustang, and this takes a rather different approach, but probably not one you'll think is better. The chip advertises 8 x 64 bit cores, but there's a 9th 32 bit core which runs all the time, even when the machine is powered down (although obviously still connected to mains power). It runs a separate firmware, in its own RAM, but can access the main memory at will and invisibly to the main OS.

One way to look at this is it's brilliant that we can just put a tiny Cortex-M3 in a spare bit of silicon and have it do useful management stuff.

It runs a separate firmware, in its own RAM, but can access the main memory at will and invisibly to the main OS

All watched over by hypervisors of loving grace.

How do you know what the firmware does? Is it even possible to inspect it, let alone replace it? It's just another part of the attack surface - not necessarily deliberately, but if there are exploitable bugs in that firmware that can be triggered from the rest of the system, it's another security risk.

It's possible to update it; not sure about replacing it with one's own code. I know this is "whataboutism", but here goes: is this different from Intel ME processors with their "hidden" Sparc core?


> Is this different from Intel ME processors with their "hidden" Sparc core?

Minor quibble: The IME is not Sun's SPARC architecture, it's ARC International's ARC, the Argonaut RISC Core, which has its origins in (of all things) the Super Nintendo's SuperFX chip.

Didn't even know they had ARC processors in them. That's a trip.

Is this what I've got in my Lenovo X1 with vPro? The Ctrl+P shortcut to get into the config at boot doesn't work - can I poke at it any other way?

In the case of high-end Texas Instruments ARM MCUs + Linux, an M3 is used for power management, though the firmware is provided as a binary blob and there's no way to control or check what it actually does (as far as I know).

"but there's a 9th 32 bit core which runs all the time, even when the machine is powered down"

So it's like the situation on mobile phones, with their baseband processors. Except on a general purpose computer.


I think x86 chips with Intel Active Management Technology have the 'Management Engine', which runs an ARC processor with a ThreadX RTOS on it. It even has its own network interface separate from the rest of the CPU. As far as I can tell this is for enterprise users who need to manage PCs and rack servers remotely, even when the OS dies.

Most motherboard vendors also throw stuff onto enterprise motherboards for doing things remotely. It can have issues: https://www.youtube.com/watch?v=GZeUntdObCA

Terrifying! Is there a way to disable that?

I don't know which chip OP is using, but no, you can't. It is usually a small CPU which is part of the GPU video decoder that is used as the 'boot' processor. It usually executes first-level ROM code and fetches the first-stage boot loader from flash, USB, etc.

It can also do PMU control when the machine is 'turned off'. The alternative is to use an external microcontroller. It is actually quite useful.

What is your reason for wanting to disable it?

It is another "DMA hacking" vector, one that is always on. https://en.wikipedia.org/wiki/DMA_attack

I have to disagree.

Those cores execute code from their local SRAM, which can only be written to under very specific conditions. You can't arbitrarily write to their SRAMs.

An SOC has various bus arbitrators that are built into hardware which control the dataflow. It is part of the chip's backbone. I've never seen an architecture in which you could easily write to the aux core's SRAM. This is partially because those cores are often responsible for DRM therefore access to them is very restricted, but also because it is expensive (in terms of gates) and unnecessary to hang them off the main bus.

It is a very unlikely "DMA hacking" vector.

The peripheral can just as easily DMA to main memory and overwrite kernel code if the memory apertures are set wide open to allow the peripheral to DMA anything into the host. Additionally, unless you have PCIe or a similar bus with mastering capability, a peripheral can't DMA at all.
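The aperture argument can be modeled in a couple of lines (a toy model on byte ranges; a real IOMMU enforces this at page granularity):

```python
def dma_allowed(windows, addr, length):
    """True iff the whole transfer [addr, addr+length) fits inside one
    of the device's programmed aperture windows. A 'wide open' setup is
    a single window spanning all of RAM, which is what lets a malicious
    peripheral overwrite kernel code."""
    end = addr + length
    return any(lo <= addr and end <= hi for (lo, hi) in windows)
```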

It's basically a fundamental part of the SoC, so I doubt it could be disabled.

That's more or less what the Management Engine (Intel) and Platform Security Processor (AMD) are. In the latter case, it's also some smaller ARM core.

ARM is doing all the same things Intel is. EL3/Secure mode is basically Intel SMM. For every "feature" Intel has, there is a similar version for ARM64.

"kill UEFI"

Sounds good to me. I see UEFI as an added, redundant, poor quality OS. I'm a connoisseur of bootloaders and live in a TTY so UEFI is another command line that I do not need. You said it best: its features are "rarely used"; it just provides unwanted third parties with another "layer to hide in".

I would prefer a pony over UEFI... possibly Rainbow Dash.

> UEFI stands for "Unified Extensible Firmware Interface", where "Firmware" is an ancient African word meaning "Why do something right when you can do it so wrong that children will weep and brave adults will cower before you", and "UEI" is Celtic for "We missed DOS so we burned it into your ROMs".

(From the excellent https://lkml.org/lkml/2011/5/25/228)

Probably worth pointing out that the author is the project lead of Qubes, one of the very few promising projects in the vast wasteland of computer security.

Link for the lazy: https://www.qubes-os.org/

Very few? Seriously?

Seriously. The vast majority of computer security effort is wasted on things like the advisory-and-patch cycle, pen testing, and virus scanning, which can never, by their very nature, provide computer security. That's not to say you don't have to do them — it's just that they're not productive.

The game console guys have their act together. Well, Microsoft, anyway. And Apple is doing a great job on the mobile OS side of things. There's also some very interesting hypervisor work coming out of the Windows 10 group.

So "most" is probably okay, with a couple of notable exceptions:

Android needs to get its shit together. Not letting any old manufacturer write device drivers with jaw-droppingly bad security holes would be a start. I last looked at vendor-provided drivers in 2010 or so and I very much doubt they have improved.

(A while ago I wanted to store a secret on an Android device. And I couldn't do it. Ten-year-old platform and no effective secure storage; did the ghost of J Edgar Hoover visit Google and threaten them?)

Network equipment manufacturers: why even bother with a home router when some code monkey stuck a hard-coded password into the firmware? I'd love to be able to inspect the code on the device I'm trusting to keep my network safe. Interesting that DD-WRT is under political attack, isn't it?

Oh, I understand now and I agree wholeheartedly.

There's been some exciting progress in the formal verification department in recent years, though.

I agree!

I totally agree with him that the vast majority of INFOSEC products are a waste. Just take any of them and compare them to the risks in my enumeration. Also, note what prerequisites for security their development processes have vs that list. You'll find that most projects are to secure computing what night is to day. ;)


I critiqued QubesOS in the past over re-inventing the wheel, and on a highly insecure platform. Her recent write-up supports my critique more than ever. Regardless, they're at least doing something with provable benefit and high usability on a platform with proven benefit, both of which can be further secured or extended by others. It's an exception to the rule of mainstream INFOSEC, where the sense of security is almost entirely false, as no effort is taken to address the TCB.

The only project in this space leveraging best practices in TCB or architecture is GenodeOS. They're doing what I suggested QubesOS do a while back: build on all the proven, low-TCB techniques in academia. The main critique I had of them is that they're too flexible and need to focus on a single stack long enough to get it working solidly, like the Qubes team did. They keep building on and integrating the better stuff out of the L4 family of security engineering research, though.

Yeah man the only good things in CS are Postgres and common lisp. Everything else is a waste of time.

Try crash-safe.org or Cambridge's CHERI for hardware; Qflow w/ Yosys for OSS synthesis; Microsoft's VerveOS for OS correctness; Racket for LISP; Google's F1 RDBMS for databases; Ur/Web for web apps; Cornell's JIF/SIF/SWIFT/Fabric for distributed apps; Coqasm for assembler; CompCert and CakeML for compilers/tooling.

That's just a tiny selection from my collection. Lots of exciting things going on for secure and correct tools that are still powerful. Postgres and Common LISP are both weak and boring in comparison despite being good tools. :P

I don't know if I understand how most of these things could be considered secure unless they've been heavily attacked already.

Are they all so much more secure by design that you consider them to be great projects?

My experience is heavily with server-side web languages, so I'm particularly skeptical of those. Even the most secure-seeming web languages have buggy, insecure implementations at first.

Those projects are mostly research into how to make software and hardware less buggy; most of them are not themselves written with a threat model in mind.

Exactly. Most could be, though, if people put forth the effort. So I keep mentioning such work.

Note: This comment is mainly for others reading along. Something I do on forums. I know you already understand this point.

I think he means there are few operating systems out there that make security the primary goal. Most other options seem to think in terms of "how can we best secure the platform we already have and is used by millions of people without breaking anything".

When that's what you're working with, you're limiting yourself quite a bit in terms of adding new security solutions. At best you'll be at least a decade behind the innovators in security who aren't afraid to build new stuff from scratch and break the old stuff.

No, that's not what I mean, and that's not what Qubes is.

Making security the primary goal of your operating system would be nearly as perverse as making swapping the primary goal of your operating system. The primary reason security seems special here is that we do have working swapping in our operating systems, but we don't have working security.

Nevertheless, if you try to add virtual memory to an operating system that was designed without knowledge of how such a thing could work (like nearly all 1960s operating systems) it is going to be pretty rough going! Today, security is where virtual memory was 50 years ago.

Qubes is interesting especially because it doesn't break compatibility with everything else.

That's what any VMM-style solution does, though. They use basic isolation and controlled sharing while letting problems remain in every other component. Nothing special there, and it doesn't address the many security threats that hit other parts.

I mainly consider solutions like Qubes to be for preventing accidental leaks, containing damage from regular malware, and making recovery easier. Much like the Compartmented Mode Workstations and MILS virtualization that came before it.

Real, more-thorough security will break compatibility or take a huge performance/functionality hit. That was true in any system designed to high assurance or surviving NSA pentesting. It will be more true for whatever supports legacy applications on today's more complex, leaky ISAs/APIs. CheriBSD is the closest thing to an exception, but I don't even trust its monolithic parts due to how attacks can jump around in a system. The Nizza Architecture on non-Intel, security-enhanced processors is the best model for now, given we can at least isolate security-critical apps into their own partitions on tiny TCBs. No mature FOSS implements that, though.

So, regular malware defence with Qubes etc., and energy-gapped systems + KVMs + guards for high-strength attacker defence, remain the options.

The book referred to by the article -- `Platform Embedded Security Technology Revealed` -- appears to be available for download at no cost right now[1]. Pricing error or not, I've just completed checkout without issue.

For completeness, I have no affiliation or connection with Apress -- please consider this a heads-up.

[1] http://www.apress.com/9781430265719

It's been free for months, so probably not a mistake.

So I read the blog post and skimmed the PDF and I'm left with some questions. If these security issues have been present for 10 years, but there hasn't been any widespread malicious exploitation of them, are they really issues?

To create an analogy: my car doesn't have bullet-proof glass; someone could easily shoot it up and I'd be dead. But nobody really goes around shooting up cars, so is it an issue?

The problem is that if you're trying to build a secure computing environment (like Joanna is with Qubes OS), you run into limitations all the time.

Those platform issues may not be a problem for Jane Doe on Windows 10, but when users decide that they need more security than that (and Qubes points in the right direction, although there's still some miles to go) they may have a reason (or just paranoia).

In either case, they won't be very happy with the sad state that is x86 "security" because there are way too many places where an undue trust into Intel is implied.

E.g. the SGX feature, which can run userland code in a way that even the kernel (or SMM) can't read it: the keys are likely mediated by the Management Engine (ME) - which also comes with network access and a huge operating system (for the purposes of an embedded system: the smallest version is 2MB) that you, the user, can't get rid of.

So who's SGX protecting you from if you fear involvement by nation state actors? x86 isn't for you in that case (Intel's version in particular, but pretty much all alternatives are just as bad) - and that's what this paper points out.

Intel describe[1] SGX as a feature designed to "enable software vendors to deliver trusted[2] applications", where applications would "maintain confidentiality even when an attacker has physical control of the platform and can conduct direct attacks on memory".

This already suggests the owner of the CPU isn't who they are protecting, but it gets worse (even before we consider the risk from AMT). Starting an SGX enclave seems to require[3] a "launch key" that is only known by Intel, allowing Intel to control what software is allowed to be protected by SGX.

[1] https://software.intel.com/en-us/blogs/2013/09/26/protecting...

[2] Before the term "DRM" was coined, the same crap used to be called "trusted computing" (back when Microsoft was pushing Palladium/NGSCB)

[3] https://jbeekman.nl/blog/2015/10/intel-has-full-control-over...

This kind of feature would be amazing for security if it weren't going to be immediately abused by DRM-encumbered vendors, MS, and vague yet menacing government agencies trying to lock users out of their own devices.

If I could provide all the keys, my machine could be completely locked down and damn near impossible to break into, even with complete physical access and an ECE degree.

One thing we would actually want, here, though, is a setup where you can rent out your computer (i.e. as an IaaS provider), without being capable of monitoring the renter. In that kind of setup, the tenant does not want you to own "all the keys to your machine"—or, at least, they want to have some way to verify that you have disabled/discarded those keys.

I don't see the point of this. Either you trust your cloud provider, or you don't put it in the cloud. You could think of a technical solution to prevent monitoring, but how can you ever be sure that your provider has actually implemented it? Plus, I don't think providers would want something like this; if there's something gravely illegal going on, you want to be able to know and ban that user from your service.

> One thing we would actually want, here, though, is a setup where you can rent out your computer (i.e. as an IaaS provider), without being capable of monitoring the renter.

That would require all hardware to be secure against all attackers. As soon as one attacker breaks one hardware model, they can start extracting and selling private keys that allow anyone to emulate that piece of hardware in software.

I'm also having a hard time seeing the use case. What kind of thing has hard secrecy requirements but demands so much hardware that you can't justify owning it?

Exactly, cloud computing is a potentially much more important market for SGX than DRM. Even though Intel could no doubt hand over machine keys to any government agency on request without you knowing, it potentially protects you against e.g. malicious admins at a cloud provider. There has been some really interesting research recently on running applications in an SGX enclave where the OS itself runs outside the enclave and is completely untrusted (see e.g. the Haven paper from Microsoft Research at OSDI last year, it's extremely cool).

Bingo! You could implement, for instance, a verifiable, safe Bitcoin mixer with it. (I pick this as a nice example, because it's something that is in demand (for better or worse) and is impossible to do at the moment.)

immediately abused for DRM? If you look (somewhat) closely, it's hard to avoid the impression that this stuff was designed and built around DRM use cases.

The "Protected A/V Path" could be a neat feature for high security computers (consider the GPU driver, a horrible, buggy, complex piece of software, being unable to grab pixels of a security-labelled window) - but that's not what this was built for. SGX, the same.

Non-DRM use cases seem to be an afterthought, if possible at all (typically not).

Of course they are. We ran the Internet on C code that was positively riddled with trivially exploitable stack overflows for 7 years after the Morris Worm demonstrated RCE through overflows --- 6 years after the "microscope and tweezers" paper explained how the attack worked.

Exact same story with error oracle attacks in cryptography.

Attackers go after the low hanging fruit first, and then they move up the tree.

Well that was kind of my point, that hardware is so far up the security tree, it's almost moot (that's kind of my question I guess. Is it far enough up that tree to be moot?). To compare with my analogy, a hitman doesn't need to shoot me up while I'm driving my car, he can wait until I've exited the vehicle and negated any protection I might have had. Similarly, a hacker can avoid the hardware entirely and wait by a printer to read those secure financial documents. Or they can watch over your shoulder while you type your password. Etc. Etc.

It's the 'Holy Grail' of exploitation though - if you can back-door the hardware as she's suggested in the paper, nothing in the software stack can detect it, which means you cannot know if your machine is secure or not.

The fact it's very hard to achieve means it's not something that's likely, but if a government decides that it wants to commandeer your computing hardware, there's nothing you could do to stop them, plus you'd never know that it occurred.

Computer platform security is not like physical security. Once you write the software to accomplish a platform attack, it's usually about as simple to execute it as it would be to execute a simpler attack. The complexity is in the software, not the attack execution.

Hardware weaknesses are being exploited right now by high-strength attackers in intelligence services and stealthy contractors. The TAO leaks support this. Additionally, there was even malware in the past that used CPU errata for obfuscation. So, we can't ignore this.

On top of it, there are dozens of designs in academia and even less risky options in industry that counter most of this stuff with various tradeoffs. So, anyone who wants to build something better has quite the options. The problems are literally there for backwards compatibility and cost avoidance, as far as I can tell.

25 years: SMM was born with the 386SL in 1990.

But it gets worse: every processor from the PPro (1995) on to Sandy Bridge has a gaping security hole, reported (conveniently only AFTER Intel patched it, two generations ago) by a guy working for Battelle Memorial Institute, a known CIA front and black-budget sink.


surprisingly good writeup: http://www.theregister.co.uk/2015/08/11/memory_hole_roots_in...

list of CIA fronts: http://www.jar2.com/2/Intel/CIA/CIA%20Fronts.htm (Battelle is on it)

The short answer is that there is a plethora of software level issues that are much easier to exploit, so people don't bother with hardware bugs.

Does this mean we should stop worrying about hardware bugs? I don't know the answer to this question. A principal engineer in the group that does Intel's hardware security validation and pentesting told me that they felt their job was to maintain the status quo of hardware bugs being harder to exploit than software bugs. More security than this is probably not justified from a risk-vs-cost perspective, while less security than this would probably break a lot of assumptions that people designing software make.

I think a more fitting analogy would be:

My car has a software vulnerability that would allow somebody clever to take control of the steering remotely while I drive, but nobody really goes around remote controlling other people's cars, so is it an issue?

Depends, are there people that might try to shoot you specifically, or does non-bullet-proof glass have weaknesses against other things that might happen more commonly?

(= just because something isn't in widespread use yet/maybe hard to do doesn't mean it isn't used in targeted attacks. Or might become widespread after new discoveries or in combination with other vectors. And a lot of her work (e.g. Qubes OS) aims at making things secure on a very low level)

Also, some of these features are marketed and sold to us as additional protections, and I think it is important to see if they can actually do what they promise or if they just add complications, especially if they inconvenience users.

An interesting point I read a while ago (wish I could find the article) is that variable-length instruction sets (like x86) are preferred by authors of malicious software over fixed-length sets because the binaries are harder to analyze. That is because in variable-length ISAs, you must use a recursive descent parser to find all code paths in the program, because jump targets are specified in bytes or words instead of discrete instructions. This allows someone to jump into what might be the data portion of an instruction when parsed one way, and now the behavior totally changes because the bytes are being interpreted another way.
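The effect is easy to show by hand-decoding one short byte string (decodings per the x86 one-byte opcode map: B8 starts a 5-byte `mov eax, imm32`, 90 is `nop`, C3 is `ret`):

```python
# The same five bytes, decoded from two different starting offsets.
code = bytes([0xB8, 0x05, 0xC3, 0x90, 0xC3])

# A linear sweep from offset 0 sees ONE instruction:
#   B8 05 C3 90 C3    mov eax, 0xC390C305
#
# Jumping to offset 3 - into the mov's immediate - finds instructions
# "hidden" inside it, which is exactly what recursive-descent
# disassemblers (and ROP attackers) have to contend with:
#   90                nop
#   C3                ret
hidden = code[3:]
```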

Are you talking about "The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)"(https://cseweb.ucsd.edu/~hovav/dist/geometry.pdf)? Great paper with a great title.

I wasn't, I'm pretty sure it was a blog post not an academic paper. Regardless, this paper looks awesome thanks for the link :D

Yes, but this property is probably more regularly used for ROP attacks.

This has bitten me in CTFs before - anyone aware of any disassembler tools that help with this problem?

Hmm, my favorite vulnerability is x86's lack of self-synchronization, meaning that the same byte stream can be two different streams of valid instructions depending on where you start reading.


This technique is also used to render W^X useless.

There just have to be backdoors built into the Intel Management Engine. Intel won't disclose what code it executes, so we have to assume there's a backdoor. The question is, whose backdoor.

It would be useful to install some honeypot machines which would appear to be interesting to governments (an ISIS bulletin board, for example) and record every packet going in and out.

This is why I laugh about people here that laugh about backdoors in their TRNG, etc. Intel's been backdoored for AMT, etc for a while. Those circuits, due to NRE costs, have to be in most of their chips whether they advertise them or not. They have deep read access into everything in the system with who knows what write access. We also know some of their chipsets have radios in them which might be in the others, permanently or temporarily disabled.

Just a huge black box of interconnected black boxes, at least one set of which is definitely a backdoor. And the worst thing is, I heard it can work when the machine is entirely or somewhat powered down. (!) I don't know for sure because I won't buy one lol. The old stuff less likely to have those features works fine for me with my builds.

Gaisler's stuff and RISC-V are best hope as they're both open hardware plus getting fast. Gaisler's are already quad-core with as much I.P. as people could ever use. Anyone wanting trustworthy hardware knows where to start on building it. CheriBSD on CHERI capability processor is also open-source and can run on a high-end FPGA. So, there's that for use or copying in a Gaisler modification.

> Gaisler's stuff and RISC-V are best hope as they're both open hardware plus getting fast. Gaisler's are already quad-core with as much I.P. as people could ever use. Anyone wanting trustworthy hardware knows where to start on building it. CheriBSD on CHERI capability processor is also open-source and can run on a high-end FPGA. So, there's that for use or copying in a Gaisler modification.

How can you trust the FPGA? Or the very closed-source bitstream generator necessary to compile the VHDL/Verilog code?

Assuming you want to manufacture secure processors from these designs, how can you trust the chip fab?

I'm genuinely interested, as I'm not aware of any research into protection from these issues.

You have several ways to deal with trust issues in hardware:

1. Monitor hardware itself for bad behavior.

2. Monitor and restrict I/O to catch any leaks or evidence of attacks.

3. Use triple, diverse redundancy with voter algorithms for given HW chip and function.

4. Use a bunch of different ones while obfuscating what you're using.

5. Use a trusted process to make the FPGA, ASIC, or both.
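Item 3's voting scheme can be sketched like this (the names are mine; a production version would also handle timeouts and diverging side effects):

```python
from collections import Counter

def voted_result(implementations, *args):
    """Run the same operation on several diverse implementations and
    return the strict-majority answer. No majority means a possibly
    compromised or faulty unit, so fail loudly instead of guessing."""
    results = [impl(*args) for impl in implementations]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority - possible compromised unit")
    return value
```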

I've mainly used Nos. 2-4, with No. 5 being the endgame. I have a method for No. 5 but can't publish it. Suffice it to say that almost all strategies involve obfuscation and shell games, where publishing them gives enemies an edge. Kerckhoffs's principle is wrong against nation-states: an obfuscated and diversified combination of proven methods is the best security strategy. Now, ASIC development is so difficult and cutting-edge that knowing the processes themselves aren't being subverted is likely impossible.

So, my [unimplemented] strategy focuses on the process, people, and key steps. I can at least give an outline as the core requirements are worth peer review and others' own innovations. We'd all benefit.

1. You must protect your end of the ASIC development.

1-1. Trusted people who won't screw you and with auditing that lets each potentially catch others' schemes.

1-2. Trusted computers that haven't been compromised in software or physically.

1-3. Endpoint protection and energy gapping of those systems to protect I.P. inside with something like data diodes used to release files for fabs.

1-4. Way to ensure EDA tools haven't been subverted in general or at least for you specifically.

2. CRITICAL and feasible. Protect the hand-off of your design details to the mask-making company.

3. Protect the process for making the masks.

3-1. Ensure, as in (1), security of their computers, tools, and processes.

3-2. Their interfaces should be done in such a way that they always do similar things for similar types of chips with same interfaces. Doing it differently signals caution or alarm.

3-3. The physical handling of the mask should be how they always do it and/or automated where possible. Same principle as 3-2.

3-4. Mask production company's ownership and location should be in a country with low corruption that can't compel secret backdoors.

4. Protect the transfer of the mask to the fab.

5. Protect the fab process, at least one set of production units, the same way as (3). Same security principles.

6. Protect the hand-off to the packaging companies.

7. Protect the packaging process. Same security principles as (3).

8. Protect the shipment to your customers.

9. Some of the above apply to PCB design, integration, testing, and shipment.

So, there you have it. It's a bit easier than some people think in some ways. You don't need to own a fab really. However, you do have to understand how mask making and fabbing are used, be able to observe that, have some control over how tooling/software are done, and so on. Plenty of parties and money involved in this. It will add cost to any project doing it which means few will (competitiveness).

I mainly see it as something funded by governments or private parties for increased assurance of sales to government and security-critical sectors. It will almost have to be subsidized by governments or private parties. My hardware guru cleverly suggested that a bunch of smaller governments (e.g. G-88) might do it as a differentiator and for their own use. Pool their resources.

It's a large undertaking regardless. As far as specifics, I have a model for that, and I know one other high-assurance engineer with one. Most people just do clever obfuscation tricks in their designs to detect modifications or brick the system upon their use, with optional R.E. of samples. I don't know those tricks and it's too cat-and-mouse for me. I'm focused on fixing it at the source.

EDIT: I also did another essay tonight on cost of hardware engineering and ways to get it down for OSS hardware. In case you're interested:


I guess I still don't follow. Allow me to better specify the threat model I have in mind:

Consumer wants one computer system that he trusts. Consumer should be able to get one without having to trust any of the manufacturers or integrators. They should not be able to subvert the security of the system, assuming the published code and specs contain no errors. There should be no black boxes to trust.

Design team wants to make and provide open hardware. They want to service Consumer, and they want to do it in a way that Consumer does not need to trust any blackbox processes.

How does this happen? Note that I'm not asking about keeping the VHDL code secure, how to physically secure the shipment to the fab company, etc. I'm asking how Consumer, who gets one IC, can verify that the IC matches exactly with the published VHDL code and contains no backdoors.

It seems you mainly focus on how the design team can minimise the chances of subversion. That's a much lower bar and not really sufficient in my mind. There's still too many places to subvert, and the end consumer still needs to trust his vendor, which is the same situation we have today.

The bit about multiple independent implementations with voting (NASA-style) sounds extremely expensive and inefficient, but also very interesting for high-security systems. Are you aware of any projects implementing it for a general-purpose computer, specifically to prevent hardware backdooring (as opposed to for reliability)?

UPDATE: To clarify, as wording is important in these kinds of discussions: When something is described as 'trusted', that's a negative to me, as a 'trusted' component by definition can break the security of the system. We need a way to do this without 'trusted' components. So when you say 'Use a trusted process to make the FPGA, ASIC, or both.', that sounds like exactly what we have today - the consumer gets a black box, and no way to verify that it does what it's claimed to do. The black box must be 'trusted' because there's no other way. Me knowing that the UPS shipment containing the mask had an armed guard does not make me more likely to want to trust the chip.

"Design team wants to make and provide open hardware. They want to service Consumer, and they want to do it in a way that Consumer does not need to trust any blackbox processes. How does this happen?"

That was covered here: " I have a method for No 5 but can't publish it. Suffice it to say that almost all strategies involve obfuscation and shellgames where publishing it gives enemies an edge."

My best scheme does still trust and check some black-box processes, though, with security ranging from probabilistic to strong with some risks. Mainstream research [1] has a few components of mine. They're getting closer. DARPA is funding research right now into trying to solve the problem without trust in masks or fabs. We're not there yet. Further, the circuits are too small to see with a microscope, the equipment is too expensive, things like optical proximity correction algorithms are too secret, the properties of fabs vary too much, and there's too little demand to bring this down to the point where just anyone can do it openly. Plus, even the tooling itself is black boxes inside black boxes out of sheer necessity, due to its esoteric nature, constant innovation, competition, and patents on key tech.

Note: Seeing chip teardowns at 500nm-1um did make me come up with one method. I noted they could take pictures of circuits with a microscope. So, I figured circuit creators could create, distribute, and sign a reference image for what that should look like. The user could decap and photo some subset of their chips. They could use some kind of software to compare the two. If enough did this, a chip modification would be unlikely except as a denial-of-service attack. Alas, you stop being able to use visual methods around 250nm and it only gets harder [2] from there.
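
The comparison step in that idea could be sketched roughly like this (a hypothetical C fragment, assuming the signed reference image and the user's decap photo are already aligned, scaled grayscale arrays - the registration and lighting problems real tools would have to solve are skipped entirely):

```c
#include <math.h>
#include <stddef.h>

/* Compare one tile of the decap photo against the corresponding tile of
 * the published reference image. Both are grayscale byte arrays of n
 * pixels. Returns 1 if the mean per-pixel difference exceeds the
 * threshold, i.e. the tile looks modified and warrants closer review.
 * Normal process variation means the threshold must be well above zero. */
int tile_differs(const unsigned char *ref, const unsigned char *photo,
                 size_t n, double threshold)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += fabs((double)ref[i] - (double)photo[i]);
    return (sum / (double)n) > threshold;  /* 1 = suspicious tile */
}
```

If enough independent users each photograph a random subset of tiles from their own chips, an attacker can't predict which region escapes inspection, which is what gives the scheme its probabilistic strength.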

Very relevant is this statement by a hardware guru that inspired my methods, which embrace and secure black boxes instead of going for white boxes:

"To understand what is possible with a modern fab you'll need to understand cutting edge Lithography, Advanced directional etching , Organic Chemistry and Physics that's not even nearly mature enough to be printed in any text book. These skills are all combined to repeatedly create structures at 1/10th the wavelength of the light being used. Go back just 10 or 15 years and you'll find any number of real experts (with appropriate Phd qualifications) that were willing to publicly tell you just how impossible the task of creating 20nm structures was, yet here we are!

Not sure why you believe that owning the fab will suddenly give you these extremely rare technical skills. If you don't have the skills, and I mean really have the skills (really be someone that knows the subject and is capable of leading-edge innovation), then you must accept everything that your technologists tell you, even when they're intentionally lying. I can't see why this is any better than simply trusting someone else to properly run their fab and not intentionally subvert the chip creation process.

In the end it all comes down to human and organizational trust, "

Very well said. Still an argument for securing the machines they use or the transportation of designs/masks/chips. The critical processes, though, will boil down to you believing someone who claims expertise and to have your interests at heart. I'm not sure I've even seen someone fully understand an electron microscope down to every wire. I'll assure you the stuff in any fabrication process, from masks to packaged IC's, is much more complex. Hence my framework for looking at it.

"how to physically secure the shipment to the fab company, etc. I'm asking how Consumer, who gets one IC, can verify that the IC matches exactly with the published VHDL code and contains no backdoors."

Now, for your other question, you'd have to arrange that with the fabs or mask makers. It would probably cost extra. I'm not sure, as I don't use the trusted foundry model [yet]. My interim solution is a combination of tricks that don't strictly require that but are mostly obfuscation. You'd need guards you can trust who can do good OPSEC, and it can never leave your sight at customs. You still have to trust the mask maker, fab, and packager. That's the big unknown, though, ain't it? The good news is that most of them have a profit incentive to crank out product quickly at the lowest cost while minimizing any risks that hurt business. If they aren't attacking or cooperating, it's probably for that reason.

"how to physically secure the shipment to the fab company, etc. I'm asking how Consumer, who gets one IC, can verify that the IC matches exactly with the published VHDL code and contains no backdoors."

That's semi-true. Re-read my model. The same one can protect the consumer with minor tweaks. That's because my model maps to the whole lifecycle of ASIC design and production. One thing people can do is periodically have a company like ChipWorks tear it down to compare it to published functionality. For patents and security, people will do that already if it's a successful product. So, like Orange Book taught me long ago, I'm actually securing the overall process plus what I can of its deliverables. So long as process stays in check, it naturally avoids all kinds of subversions and flaws. High assurance design and evaluation by independent parties with skill do the rest.

"The bit about multiple independent implementations with voting (NASA-style) sounds extremely expensive and inefficient, but also very interesting for high-security systems. Are you aware of any projects implementing it for a general-purpose computer, specifically to prevent hardware backdooring (as opposed to for reliability)?"

It's not extremely expensive: many embedded systems do it. It just takes extra hardware, an interconnect, and maybe one chip (COTS or custom) for the voting logic. These can all be embedded. Those of us doing it for security all did it custom on a per-project basis: no reference implementation that I know of. There are plenty of reference implementations for the basic scheme under phrases like triple modular redundancy, lockstep, voting-based protocols, recovery-oriented computing, etc. Look those up.

You can do the voting or error detection as real-time I/O steps, transactions, whatever. You can use whole systems, embedded boards, microcontrollers, FPGA's, and so on. The smaller and cheaper stuff has less functionality, with lower odds of subversion or weaknesses. It helps to use ISA's and interfaces with a ton of suppliers for the diversity and obfuscation part. If you're targeted, don't order with your name, address, or general location. A few examples of fault-tolerant architectures follow. You're just modifying them to do security checks and preserve invariants instead of mere safety checks, although safety tricks often help given the overlap.

App-layer, real-time embedded http://www.montenegros.de/sergio/public/SIES08v5.pdf

Onboard an ASIC in VHDL http://www.ijaet.org/media/Design-and-analysis-of-fault-tole...

FPGA scheme http://crc.stanford.edu/crc_papers/yuthesis.pdf

A survey of "intrusion-tolerant architectures" which give insight http://jcse.kiise.org/files/V7N4-04.pdf
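
The core 2-of-3 voting step those references describe can be sketched in a few lines of C (a hypothetical minimal voter; a real design would also handle synchronization, timing, and flagging the dissenting replica for audit):

```c
/* 2-of-3 majority voter: three independently sourced units compute the
 * same result; the voter accepts any value at least two agree on. If no
 * two agree, it fails safe and the operation must be retried or the
 * system halted for inspection. */
typedef struct {
    int ok;     /* 1 if at least two replicas agreed */
    int value;  /* the majority value when ok == 1 */
} vote_result;

vote_result vote3(int a, int b, int c)
{
    vote_result r = {0, 0};
    if (a == b || a == c) { r.ok = 1; r.value = a; }
    else if (b == c)      { r.ok = 1; r.value = b; }
    return r;  /* r.ok == 0: no majority, fail safe */
}
```

The security twist over plain fault tolerance is diversity: if the three units come from different vendors on different ISA's, a backdoor in one is outvoted by the other two.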

"To clarify, as wording is important in these kinds of discussions: When something is described as 'trusted', that's a negative to me, as a 'trusted' component by definition can break the security of the system."

Oops. I resist grammar nazis but appreciate people catching wording that really affects understanding. That example is a mistake I intentionally try to avoid in most writing. I meant "trustworthy" and "trusted" combined. You can't avoid trusted people or processes in these things. The real goal should be to minimize the amount of trust necessary while increasing assurance in what you trust. Same as for system design.

"Me knowing that the UPS shipment containing the mask had an armed guard does not make me more likely to want to trust the chip."

Sorry to tell you that it's not going to get better for you outside the sacrifices of above-style schemes, which are only probabilistic and with significant unknowns in the probabilities. Tool makers, fabs, and packaging must be semi-trusted in all schemes I can think of. The designs must be turned into circuitry at some point. The best mix is putting detection, voting, or something critical on an older node or custom wiring: what you can vet by eye if necessary. You can still do a lot with 350nm. Many high-assurance engineers use older hardware with hand-designed software between modern systems due to subversion risk. I have a survey [3] of that stuff, too. :)

Note: My hardware guru did have a suggestion I keep reconsidering. He said most advanced nodes are so difficult [4] to use that they barely function at all. Plus, mods of an unknown design at mask or wiring level are unlikely to work except in the most simplistic cases. I mean, they spend millions verifying circuits they understand, so arbitrary modifications to black boxes should be difficult. His advice, though expensive, was to use the most cutting-edge node in existence while protecting transfer of the design and the chips themselves. The idea being that subversion of the ASIC itself would fail or not even be tried due to difficulty. I like it more the more I think about it.

[1] https://www.cs.virginia.edu/~evans/talks/dssg.pptx

[2] https://www.iacr.org/archive/ches2009/57470361/57470361.pdf

[3] https://www.schneier.com/blog/archives/2013/09/surreptitious...

[4] http://electronicdesign.com/digital-ics/understanding-28-nm-...

" "Considered Harmful" Article Titles Considered Derivative and Uncreative"

Best enumeration of x86 security problems I've seen so far. Solid argument to avoid Intel in security-critical products where possible. :)

The title's dumb but the paper has some good info in it.

To paraphrase Stroustrup:

There are two kinds of systems: the ones that have security holes and the ones that people don't use.

Yes yes, everyone should move to MCST Elbrus :D

No, use Gaisler's stuff:


Also SPARC, but with plenty of GPL. It has a quad-core, too, with all of them designed to be easily modified and re-synthesized. :)

There are a few of these (open architectures) - but does anyone know how much (ballpark) it'd cost to make something like the Raspberry Pi 2 (i.e. a full SoC with gig ethernet, USB, HDMI, SATA)? Say 10,000 units?

I'm assuming it'd be expensive, as it doesn't appear anyone's doing it...

Several million dollars. Everything is extremely expensive - IP licenses, ASIC layout software licenses, simulation and verification software and possibly hardware, mask costs, line setup costs, wafer production costs, packaging costs, testing costs, etc.

If you're in ASIC industry, look at my reply and I'd appreciate your thoughts on my cost-reduction tactics:


IP licences for GPLed CPU cores and schemas? Several million doesn't sound that bad. It means the bar moves to 100k rather than 10k units (if the goal is to break even in the short term). And it's tricky to sell 20k units/year for five years, as the cost to upgrade (clock, RAM) would probably be in the same ballpark as the initial investment?

IP licenses for things like analog clock management components/PLLs, analog Ethernet PHYs, analog serializers and deserializers for HDMI, SATA, USB 3, etc. These are all mixed-signal components. I am not aware of any open source designs for any of these for modern ASIC targets. Most open source designs target FPGAs which already have these components built onto the FPGA itself (i.e. the open source design uses the module as a 'black box'). These will probably come in GDSII form (actual layout, not a schematic, RTL, etc.) for a specific process with a specific foundry. If you want to design those yourself, then you would have to get additional licenses for analog design and simulation suites. And you might have to re-spin a couple of times (with millions of $ in mask costs) on each targeted process technology to get the kinks worked out.

Thank you for clarifying. Basically I thought maybe something like:


already existed - but apparently not (except for targeting FPGAs as you mention) ?

He addressed your point when he said most of them target FPGA's and often leverage what's already on them. I'll add that the quality, documentation, and so on at opencores.org seems questionable given all the complaints I read from pro's and amateurs alike. Some are good but I'm not qualified to say past what was ASIC-proven.

The analog stuff he mentioned is really tricky on any advanced node. Everything is difficult at the least. It all needs good tooling that's had around a billion a year in R&D (Big Three) going back over a decade to get to the point they're at. OSS tooling is getting better, esp for FPGA's. However, open-source ASIC's aren't going to happen with the open-source development model. Like many great things, they'll be built by teams of pro's and then open-sourced. Gotta motivate them to do that. Hence, my development models in the other post.

Right. Which is of course why we have stuff like the NASA/ESA making and releasing designs - big government projects with highly skilled staff. But they don't have much interest in releasing a "personal computer" or a "smart phone" (I'm sure they'd love to have an open hw platform to use for smart phones and tablets - or work stations and super computers, just that it's not high up on the list of priorities in the "millions of dollars" budget lists).

[ed: I'm thinking of things like LEON etc - but as mentioned, and as I understand it, for the ASIC case, maybe not the whole eval board is open. And it's not really in the same ballpark as the dual/quad multi-GHz CPUs we've come to expect from low-end hardware:

http://www.gaisler.com/index.php/products/boards/gr-cpci-leo... ]

Oh, let me be clear that any starting point will definitely have more work to do and will never be in the same ballpark as top Intel/AMD/IBM CPU's. The reason is that they use large teams of pro's with the best tools, often doing full-custom HW development. Full-custom means they'll do plenty to improve the HDL, RTL, and even wiring of the gates they use. Think of Standard Cell as Java web applications, with full custom being like delivering a whole platform with a board, firmware, assembler, OS components, and native applications. That's maybe illustrative of the differences in skills and complexity.

Example of custom design flow http://viplab.cs.nctu.edu.tw/course/VLSI_SOC2009_Fall/VLSI_L...

Note: Load this up right next to the simple, 90nm MCU PDF I gave you and compare the two. I think that you'll easily see the difference in complexity. One you'll be able to mostly follow just by googling terms, understanding a lot of what they're doing. You're not going to understand the specifics of the full-custom flow at all. There's simply too much domain knowledge built into it, combining years of analog and digital design knowledge. Top CPU's hit their benchmarks using full-custom for pipelines, caches, etc.

Example of verification that goes into making those monstrosities work:


So, yeah, getting to that level of performance would be really hard work. The good news is that modern processors, esp x86, carry lots of baggage that drains performance that we don't need. Simpler cores in large numbers with accelerators can be much easier to design and perform much better. Like so:


Now, that's 28nm for sure. The point remains, though, as Cavium didn't have nearly the financial resources of Intel despite their processors smoking them in a shorter amount of time. Adapteva's 64-core Epiphany accelerator was likewise created with a few million dollars by pro's and careful choice of tooling. So, better architecture can make up for not having the speed that full-custom provides.

Here's a nice intro by Adapteva:


Far as cost, it depends on how you do it. There's three ways to do it:

1. FPGA-proven design done by volunteers that's ported to a Structured ASIC by eASIC or Triad Semiconductor.

2. Standard Cell ASIC that's done privately.

3. Standard Cell ASIC that's done in academia whose OSS deliverables can be used privately.

Option 1 will be the cheapest and easiest. An example of these are here:



These are a lot like FPGA's, although Triad adds analog. The idea is there's a bunch of pre-made logic blocks that your hardware maps to. Unlike FPGA's, the routing is done with a custom layer of metal that only includes (or powers) necessary blocks. That lets it run faster, with less power, and cheaper. "Cheaper" is important given FPGA vendors recover costs with high unit prices.

The S-ASIC vendors will typically have premade I.P. for common use cases (eg ethernet) and other vendors' stuff can target it. Excluding your design cost and I.P. costs, the S-ASIC conversion itself will be a fraction of a full ASIC's development costs. I don't know eASIC's price but I know they do maskless prototyping for around $50,000 for 50 units. They'll likely do a six digit fee upfront with a cut of sales, too, at an agreed volume. Last I heard, Triad is currently picky about who they work with but cost around $400,000.

Option 2 is the easier version of the real deal: an actual ASIC. This basically uses EDA tools to create, synthesize, integrate, and verify an ASIC's components before fabbing them for real testing. The tools can be $1+ mil a seat. Mask & EDA costs are the real killer. Silicon itself is cheap, with packaging probably around $10-30 a chip with a minimum of maybe 40 chips or so.

Common strategies are to use smart people with cheaper tools (eg Tanner, Magma back in the day), use older nodes whose masks are cheaper (350nm/180nm), license I.P. from third parties (still expensive), or build the solution piecemeal while licensing the pieces to recover costs. Multi-project wafers (MPW's) keep costs down: they split a mask and fab run among a number of parties, where each gets some of the real estate and an equivalent portion of the cost. 350nm or 180nm are best for special-purpose devices such as accelerators, management chips, I/O guards, etc that don't need 1GHz. A 3rd-party license might be a no-go for OSS unless it's dual-licensed or open-source proprietary. Reuse is something they all do.

All in all, on a good node (90nm or lower), a usable SOC is going to cost millions no matter how you look at it. That said, the incremental cost can be in the hundreds of thousands if you re-use past I.P. (esp I/O) and do MPW's.

Company doing MPW with cool old node + 90nm memory trick on top:


Option 3 is academic development. The reason this is a good idea is that universities get huge discounts on EDA tools, get significant discounts on MPW's at places like the MOSIS fabrication service, and may have students smart enough to use the tools while being much cheaper than pro's. They might work hand-in-hand with proprietary companies to split the work between them or at least let pro's assist the amateurs. I've often pushed for our universities to make a bunch of free, OSS components for cutting-edge nodes, ranging from cell libraries to I/O blocks to whole SOC's. There's little of that but occasional success stories. Here's two standard cell ASIC's from academia: a 90nm microcontroller and (my favorite) the 45nm Rocket RISC-V processor, which was open-sourced.



Note: Those papers will show you the ASIC Standard Cell process flow and the tools that can be involved. The result was awesome with Rocket.

So, enough academics doing that for all the critical parts of SOC's could dramatically reduce costs. My proposal was to do each I/O (where possible) on 180nm, 90nm, 45nm, and 28nm. The idea being that people moving their own work down a process node could just drop in replacements. The I/O and supplementary stuff would be almost free, which lets developers focus on their real functionality.

My other proposal was a free, OSS FPGA architecture with a S-ASIC and ASIC conversion process at each of the major nodes. Plenty of pre-made I.P. as above with anyone able to contribute to it. Combined with QFlow OSS flow or proprietary EDA, that would dramatically reduce OSS hardware cost while letting us better see inside.

Archipelago Open-Source FPGA http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43...

Note: Needs some improvements but EXCITING SHIT to finally have one!

Qflow Open-source Synthesis Flow http://opencircuitdesign.com/qflow/

Synflow open-source HDL and synthesis http://cx-lang.org/

Note: I haven't evaluated or vetted Synflow yet. However, their I.P. blocks are the only ones I've ever seen for under $1,000. If they're decent quality, then there must be something to their method and tools, eh?

So, there's your main models. Both commercial and academic one might benefit from government grants (esp DARPA/NSF) or private donations from companies/individuals that care about privacy or just cheaper HW development. Even Facebook or Google might help if you're producing something they can use in their datacenters.

For purely commercial, the easiest route is to get a fabless company in Asia to do it so you're getting cheaper labor and not paying the full cost of tools. This is true regardless of who or where: tools paid for in one project can be reused on the next for free, as you pay by the year. Also, licensing intermediate I.P. or selling premium devices can help recover cost. This leads me to believe open-source proprietary, maybe dual-licensed, is best for OSS HW.

So, hope there's enough information in there for you.

Thank you for all that. Reminds me about something Alan Kay recently mentioned in a talk (I think he mentions it a lot) - used to be, universities made their own computers. The whole thing, architecture and all. Silly expensive. But now everyone uses the same crap (because it's proven crap, and because we've got software that runs on it, and because it's practically free compared to building your own). It's a sad state of affairs.

The footnote link [13] in section "The audio card" is an unrelated footnote, the correct one appears to be missing.

I really could do without "considered harmful" titles. x86 has been one of the most influential technologies of all time and a clickbait title doesn't do it justice imo.

You should write a paper explaining your views. And title it "'Considered Harmful' Considered Harmful".

> And title it "'Considered Harmful' Considered Harmful".


[edit: clarified context]

This isn't an "essay", nor is it "axe grinding". It's one of the best current available surveys on X86 platform security.

Go to SCHOLAR.GOOGLE.COM and search for "* considered harmful". Most of what Meyer has to say about "considered harmful" essays doesn't apply to these papers.

As far as academic articles go, "* considered harmful" is probably as vague and bombastic (read: clickbaity) as a title gets (perhaps after 'Ron was wrong, Whit is right'). Personally I'd prefer a more descriptive title, like 'A survey of weaknesses and attacks on the x86 platform'. But then again, I'm a boring kind of person.

I was merely referring to the 'And title it "'Considered Harmful' Considered Harmful".' part of the parent post.

"Considered Harmful" is a signifies that the text will be forceful and opinionated. I have nothing against balance and thoroughness, but many authors confuse those qualities with mealy-mouthedness and passive-aggressiveness. When I see "Considered Harmful" I know that the text will have a conclusion that isn't basically "I see some pro's and con's but I don't know guys, maybe more research is needed?".

Yes, "Considered Harmful" articles may be a little tactless and imbalanced, but they are usually also concise, honest, informative and funny. Those qualities are important to me.

"Considered harmful" is like a Betteridge-baiting headline. It is important to remember that in the original article, "GO TO considered harmful", Dijkstra a) didn't choose the title; and b) was wrong about GO TO; it is tremendously useful under the right circumstances with no real substitute (except maybe for TCO). Yet how many times is "don't use goto" cited as an iron rule of programming?

So when I see "considered harmful" I've got my eye open for a potential thought-stopper, whether deliberately created or not.

Yes, while the utterance itself is general in nature, I object to (ab)using it outside of discussing software language features (as it was originally used).

I could even live with the title if the article discussed a certain _aspect_ of a programming language(like Optional in Java 8, etc) to signal a certain vibe of the argument.

But when used wrt a microprocessor/hardware platform, it feels really, really forced. Not the end of the world, but still...

My main problem is with the term x86, because the article conflates form-factor issues (people can spy on you with the microphone in your phone - or an old analog telephone, a tablet, a game controller, an Amazon Echo, ...) with x86 issues (some real problems with the architecture in general) and the ME issue, which is an Intel thing.

> and the ME issue which is an intel thing.

> But is the situation much different on AMD-based x86 platforms? It doesn’t seem so! The problems related to boot security seem to be similar to those we discussed in this paper. And it seems AMD has an equivalent of Intel ME also, just disguised as Platform Security Processor (PSP)

The PSP is still a processor with elevated privileges, but it doesn't seem to have the ability to drive the network interface.

But she's right insofar as that x86 vendors are either in on this (mostly to satisfy the DRM-hungry Hollywood connection - most of these features have "DRM" written all over them, not "user security") or irrelevant (Via still ships its 20 slow x86 CPU samples per year that nobody wants, probably to avoid losing their x86 license).

So were PHP and goto statements.

How influential something is has nothing to do with how good it is.

goto is just a mnemonic for jmp. It's the primitive from which all higher level control flow is ultimately derived. It isn't harmful, and it's used a lot even in C.

You're missing the reference - Dijkstra wrote a famous letter on GOTO in 1968 which was published as "Go To Statement Considered Harmful":


In context, it was a piece advocating against the use of GOTO to the exclusion of all other control structures (e.g, 'for' or 'while' loops, etc).

I appreciate you thinking I'm a buffoon who was born yesterday and hasn't heard of EWD215, but it appears your reading comprehension is, to be charitable, iffy.

wyager's statement, involving PHP (for which there is not a famous "considered harmful" essay to the best of my knowledge, though there is "A Fractal of Bad Design") and goto statements, was a rather clear implication that both constructs are innately harmful, in an attempt to counter n0us' assertion that influential/popular technologies imply a high quality. There was nothing said about using goto statements in the presence of structured programming, but merely goto as an intrinsic badness. This is a common belief cargo-culted by many naive commentators and XKCD readers who do not realize that all control flow is derived from goto, and moreover that even in some languages with structured control flow it is still useful, e.g. for resource cleanup and breaking out of nested loops.
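
Those legitimate uses - resource cleanup in particular - look like this in idiomatic C (a generic sketch, not from any particular codebase):

```c
#include <stdio.h>
#include <stdlib.h>

/* Error handling with goto: a single exit path releases whatever was
 * successfully acquired, in reverse order of acquisition. Without goto
 * this becomes deeply nested ifs or duplicated cleanup code at every
 * early return. */
int process(const char *path)
{
    int rc = -1;
    char *buf = NULL;
    FILE *f = fopen(path, "r");
    if (!f)
        goto out;            /* nothing acquired yet */
    buf = malloc(4096);
    if (!buf)
        goto close_file;     /* only the file needs cleanup */

    /* ... do work with f and buf ... */
    rc = 0;

    free(buf);
close_file:
    fclose(f);
out:
    return rc;
}
```

This pattern is widespread in systems code precisely because the gotos only ever jump forward to cleanup labels, which keeps the control flow trivially auditable.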

>Be civil. Don't say things you wouldn't say in a face-to-face conversation. Avoid gratuitous negativity.


The comment being replied to was also rather uncivil by a certain definition, assuming the commenter was unfamiliar with what is by now very cliched literature and explaining it in a somewhat condescending tone, as if to a child, when in truth there was a very substantive point being made.

My statement was neither something I wouldn't say face-to-face, nor gratuitously negative.

It comes off as unpalatable, to say the least. You were assuming malice in the other poster and then went on a minor tirade without sufficient prompting.

Isn't that more aptly stated as "implemented with" not "derived from"?

Assuming a von Neumann or modified Harvard architecture where execution advances from an incremented program counter, I'd say derived from, though it may be that the former is more appropriate. It is certainly not universal, I do not make that claim.

The program counter doesn't typically appear in the programming languages that have (or don't have) goto. You're talking about implementation, but I think people typically criticize goto in terms of semantics (iirc, I can include Dijkstra in that camp, but I never made a super-careful study of that paper).

It's a matter of scale. If you're trying to compute absurdly large numbers, you'd be a fool to use addition, even if it is fundamental to some other operation you want to use. Goto is problematic not because it can't be used effectively, but because it won't be. Because it doesn't encapsulate a powerful enough abstraction to make computers smarter or programs easier to write and understand.

If you have to write a goto, you can drop into assembly. Don't add it to your high-level language, because it doesn't add anything there, it just gets in the way.

"It's the primitive from which all higher level control flow is ultimately derived."

There are a billion alternative primitives from which you could derive all the same things. Goto is not special. And it is so primitive, it is not hard to write something else and have a compiler translate it. You shouldn't need goto anymore than you should need access to registers.

That largely misses the point (Dijkstra's, originally). How control flow is implemented at a low level (e.g. jmp/longjmp) is completely separable from how it should be exposed in a language.

Wearing a C programmer's hat you may say "absolutely"; wearing a Scheme programmer's hat, perhaps "no way". Horses for courses, after all.

That's a more nuanced position than the one that is commonly understood, however. Though, you are always bound to your architectural model, so exploiting it more directly is not innately bad. Layers tend to be leaky.

Wearing a Scheme programmer's hat, call/cc isn't any less of a landmine.

> [goto] is the primitive from which all higher level control flow is ultimately derived

I'm pretty sure you cannot implement conditional branches using unconditional branches as a building block. Unless you count indirect branches, which goto usually doesn't support.

Not that I'd recommend it, but sure you can. Just allow modification of the code to include where you want to jump to. :)

Scarily enough, I think this actually used to be somewhat commonplace, and is why many functions were not reentrant.
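As a toy illustration of that trick (a hypothetical sketch of an invented machine, not any real ISA): with only an unconditional JMP available, you can fake a conditional branch by computing the jump target from a flag and writing it into the instruction stream before it executes.

```python
# Toy machine with one control-flow op: unconditional JMP.
# A conditional branch is faked by PATCH, which rewrites the
# target of a later JMP based on a runtime flag -- the
# self-modifying-code trick described above.

def run(mem, flag):
    out = []
    pc = 0
    while pc < len(mem):
        op, arg = mem[pc]
        if op == "PATCH":
            jmp_index, base = arg
            # flag 0 -> jump to `base`; flag 1 -> jump to `base + 2`
            mem[jmp_index] = ("JMP", base + flag * 2)
            pc += 1
        elif op == "JMP":
            pc = arg
        elif op == "PRINT":
            out.append(arg)
            pc += 1
        elif op == "HALT":
            break
    return out

# "if flag: print 'then' else: print 'else'" with no conditional op:
program = [
    ("PATCH", (1, 2)),   # 0: fill in the JMP below at runtime
    ("JMP", None),       # 1: target written by PATCH
    ("PRINT", "else"),   # 2
    ("JMP", 5),          # 3: skip over the "then" arm
    ("PRINT", "then"),   # 4
    ("HALT", None),      # 5
]

print(run(list(program), flag=0))  # ['else']
print(run(list(program), flag=1))  # ['then']
```

This also shows why such code is not reentrant: the program text itself carries runtime state between the PATCH and the JMP.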

The PHP bashing on this site is untenable. PHP has no intrinsic properties which stop a good programmer from writing elegant code for web applications. It's a tired discussion, I know, but the most you can say is that it's frequently abused. The oft cited "Spectacle of bad design" essay has been credibly rebutted point-by-point by other authors.

What about when some insist on using C++/JavaScript/PHP/MySQL/Mongo/etc (tools with bad design, needless complexity, bug-prone behavior, etc.) with the excuse that it's possible to use them "well" if only we are more "disciplined" and "pay attention"?

When bad tools are bad, discipline is not the answer. The answer is to fix the tool, or get rid of it.

Why do developers understand that when an end-user has a high error rate with a program it's a problem with the program, but when that happens with a language/tool for developers they don't think the same way?

"Good programmer" in this context almost means "someone with the experience to work around and avoid the pitfalls the tool throws at them, on top of doing their job", when it's better to have "someone who can concentrate on doing their job".

Of course, working around the pitfalls of tools is unavoidable in a world where "worse is better" has won. But why persist in this?

> Of course, working around the pitfalls of tools is unavoidable in a world where "worse is better" has won. But why persist in this?

Because the unstated alternative is a false choice. It would be nice if all of the code written in poorly designed languages would disappear and be replaced with code in better designed languages, but that isn't realistic. Migrating a large codebase to a different language is very expensive and introduces fresh bugs, less popular languages aren't supported by all platforms and libraries, and large numbers of people have made significant time investments in learning languages that are popular even if they aren't very good. So the old languages aren't going away.

Given that, it's better that we teach people the pitfalls of the things we're stuck with, and improve them with things like std::unique_ptr in C++ or safer SQL APIs that discourage manual parsing of SQL statements, than to pretend that there is no middle ground between continuing the tradition of bad code and the fantasy of rewriting essentially all existing code from scratch overnight.
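To make the "safer SQL APIs" point concrete (sketched here with Python's sqlite3 rather than any of the languages named above): parameterized queries pass user input out-of-band, so the input can never change the structure of the statement.

```python
import sqlite3

# Sketch of the "safer SQL API" idea: with placeholders, user input
# never becomes part of the SQL text, so no amount of quote trickery
# in the input can alter the statement's structure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))
conn.execute("INSERT INTO users VALUES (?, ?)", ("bob", "user"))

# Hostile input that would break a string-concatenated query:
user_input = "' OR '1'='1"

# The unsafe style (building the SQL string by hand) would return
# every row here. The parameterized form treats the input as a
# plain value to compare against:
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "' OR '1'='1"
```

The API makes the safe path the convenient one, which is exactly the kind of incremental improvement to an entrenched tool being argued for.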

Yeah, in the short term that's the right thing. But I'm asking why we've stayed in the same loop for DECADES. The pile of mud gets bigger with each iteration. Eventually, I think, the cost of a clean start will be far less than pushing ahead.

I don't underestimate the problem (I work in the LESS progressive area of programming: internal business apps, apps for non-startups, not sexy games/chat/scalable apps!) so I'm fully aware...

But what drives me crazy is developers who defend their tools with "they are good! why bother!", rather than using the business/cost defense...

So, yeah... let's not rewrite everything that is working right now. But a lot of the time we can choose what to use, especially for new projects... at least pick well next time...

> Yeah, in the short term that's the right thing. But I'm asking why we've stayed in the same loop for DECADES. The pile of mud gets bigger with each iteration. Eventually, I think, the cost of a clean start will be far less than pushing ahead.

The pile of mud has network effects. Even when you're starting from scratch, you're not really starting from scratch. The world is built around the things that are popular. Everything is better supported and better tested for those things. If you create a new language, it not only needs to be better, it needs to be so much better that it can overcome the advantages of incumbency. Which is made even harder when the advantageous characteristics of new languages also get bolted onto existing languages in a way that isn't optimal but is generally good enough that the difference ends up smaller than the incumbency advantage.

Which is why change happens very, very slowly. We're lucky to be essentially past the transition from Fortran and COBOL to C, Java and C++.

Actually, if you start with good measurements, many clear reasons appear why PHP is a terrible language. A nice write-up is here:


It's amazing how much has been done in what was essentially a pile of hacks on top of a pre-processor pretending to be an application language. That pros avoid it, and that its wiser users almost always leave it eventually, further corroborates the author's point that it's fundamentally bad. If anything, it's one option among better ones (Python, Ruby) for non-programmers to get started in web apps. There's little reason to use it at this point, with all the 3rd-party components and communities around better languages.

I don't think it's a nice write-up; in fact it arguably gets every point it makes wrong. It's exactly the article I was referring to in my original comment as having been thoroughly refuted, in (correct) anticipation that someone would bring it up, since it seems to be the only go-to source for PHP bashers.

That's a troll comment if I've ever seen one, with even less information than what was in the linked article. Your comment actually has no information: a mere dismissal.

On the opposite end, my link was at least clear on the attributes of a good language. These were specifically mentioned: predictable, consistent, concise, reliable, debuggable. The author gave specific examples showing PHP lacks these traits. An analysis of Python or Ruby shows them to embody these traits much more, while also possessing the supposed advantages PHP fans cite: easy learning, many libraries, a huge community, tools, etc. So the evidence indicates PHP is a poorly designed language (or not designed at all), while some competitors are well-designed languages with most of the same benefits.

Other authors say much the same about both the philosophy and the specific details, showing why PHP is a pain to work with if you want robust software, along with building the skills a good developer should have.




Truth be told, though, the burden of proof is on you PHP supporters to show why PHP is a good language and why people should use it. I claim it was a mere pre-processor that had all kinds of programming language features bolted onto it over time to let it handle certain situations. That's not design at all. Python and Ruby are designed languages with consistency, core functionality for many situations, extensions/libraries, and optionally the ability to pre-process web pages. There's a world of difference in both language attributes and the quality of what people produce with them. So not only have you presented no evidence of PHP's alleged good design, I've presented evidence against it, and evidence that two competitors have better designs.

Feel free to back up your claims with some actual data rather than dismissing whatever data anyone else brings up. If you want to dismiss the guy's ranting, feel free; you can even edit all that crap out to leave just the data and arguments. Same for the other links: the resulting text still supports our claims against PHP. So the status quo among professionals should be "PHP is garbage": it leads to buggy, hard to maintain, insecure, slow software. That will remain until PHP's community proves otherwise and demonstrates its counter-claim in practice with apps possessing the opposite of those negative traits.

It's hard to take seriously a statement about an article where even the referenced title is wrong.

It's not nitpicking: it shows that one didn't even take the time to read and understand the article. It's a "fractal" of bad design, and it's named so for a specific reason.

>PHP has no intrinsic properties which stop a good programmer from writing elegant code for web applications.

Nor does x86 assembly. What is your point?

The currently preferred clickbait title on HN would be "x86 is the new goto".

Really? I only see one of those, from 5 years ago:


I meant the generic "X is the new Y"; "goto" was just a nod to Dijkstra.

In other words, "is the new goto" is the new "considered harmful"?

$BROWSER_NAME is the new IE6.

It should also be noted that the link mentions the paper contains no new attacks - in that context, the "new paper" qualifier in the title is misleading.

Neither of these are valid criticisms.

Yours first: it is a new paper. It was just released. It has an "October 2015" dateline. It isn't a variant of any previous paper she's released. It's also a very good paper.

Second: this isn't a blog post. It's not a news site. It's a research paper. She gave it a title that follows a trope in computer science paper titles. It's silly to call it "clickbait".

As someone who's had the misfortune of going toe-to-toe with Rutkowska over details of the X86 architecture, let me gently suggest that whether she knows what she's talking about and what she's trying to say [isn't] really a fight you want to pick.

That wasn't what I was criticizing - I was criticizing the title on HN. It previously said (new paper). While that is true, in this context, it is actually a summary of existing information.

I was not criticizing the quality of information in the paper or article. I was criticizing the summary previously displayed on HN before it was changed, which suggests that someone agrees with me.

I'm lost. This is a new paper. What's the argument?

It's a new paper, but one that summarizes - the previous title was "Intel x86 considered harmful (new paper)". From that title it is very easy to infer that some new revelation has emerged showing Intel x86 to be harmful - that was my only problem. I enjoyed reading the article.

It was a narrow complaint about the title as submitted to HN - the current title "Intel x86 considered harmful – survey of attacks against x86 over last 10 years" is a lot more insightful as to the nature of the article, and less inflammatory (although I'd guess that it was unintentional).

It's called a survey paper. In this case, the survey is particularly valuable, because the stuff in it was scattered across blog posts and conference presentations --- many of them by the author of the survey.

Just not a great critique going on in this subthread.

I think you're completely missing the point... the original title on HN did not have any of that information - it just said "Intel x86 considered harmful (new paper)". There was no context that it was a survey paper - the initial impression was that it was just another clickbait inflammatory article link.

The moderators rightfully changed it, which makes my criticism addressed & outdated.

    > whether she knows what she's talking about and what 
    > she's trying to say is really a fight you want to pick
Did you mean to say: "ISN'T really a fight you want to pick"?

I am genuinely curious: can you not figure this out from the context alone (hint: misfortune)? Or are you going "big-game hunting on HN" and nitpicking tptacek's comment?

Don't they know "considered harmful" essays are considered harmful? http://meyerweb.com/eric/comment/chech.html

https://news.ycombinator.com/item?id=10223645 "We've adjusted the dupe detector to reject fewer URLs...[snip/].. Allowing reposts is a way of giving high-quality stories multiple chances at making the front page. Please do this tastefully and don't overdo it."

Considering that this second post got much more traction than the first, I don't see anything wrong.

In this case the same submitter posted two versions of the story:



Not a great approach; one ought just to pick the better of the two, which in this case is the HTML version, because it gives more background, loads faster, and links to the PDF.

General remark: I doubt that we'll make the dupe detector sophisticated enough to catch a case like this, but I do think we'll add software support for users to identify dupes when they see them. That's what happens informally already (as you all did in this thread, and by flagging the other post) so the shortest path to better deduplication for HN seems to be: write software to make community contribution easy. Also I kind of like the idea of giving a karma point to the first user who correctly links a given pair of posts.

I don't think posting two links 1 minute apart was the idea behind this rule change.

Both links were submitted within a minute of each other and were both on the front page at the same time.

Both were submitted by the same person.


Confused? The entire world uses x86 for almost everything. A couple of consumer products don't, namely some phones.

Not confused, just hipster.

"I use a really obscure instruction set, designed by a bunch of ex-Peruvian monks, working from a bedsit in Shoreditch. You've probably never heard of it."

I really had to do that once: reverse-engineer a wifi driver on an embedded board, some niche RISC instruction set invented by god knows who. I invented a disassemble-annotate-repeat tool called GOLEM, based entirely on bit-pattern scripts, that would produce a listing. You could edit the listing to include symbolic names for code points and data, then re-run the tool and it would use those names (instead of hex addresses) in the new listing (building a symbol table iteratively). Ultimately I had complete source for the firmware again.
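The annotate/re-run loop works roughly like this (a hypothetical reconstruction for a made-up one-instruction format, since the real GOLEM is lost; `disassemble`, the 12-bit jump-target encoding, and `reset` are all invented for illustration):

```python
# Sketch of an iterative annotate/re-run disassembler loop, in the
# spirit of the GOLEM description above. The "ISA" here is invented:
# each 16-bit word is a jump whose target is its low 12 bits. The
# user-maintained symbol table maps addresses to names; every re-run
# substitutes known names for raw hex in both labels and operands.

def disassemble(words, symbols):
    lines = []
    for addr, word in enumerate(words):
        target = word & 0xFFF
        name = symbols.get(target, f"0x{target:03x}")
        label = symbols.get(addr)
        prefix = (label + ":") if label else ""
        lines.append(f"{prefix:>10} jmp {name}")
    return lines

firmware = [0x1005, 0x1002, 0x1000]

# First pass: raw addresses only.
first = disassemble(firmware, {})

# The user recognizes address 0 and names it in the symbol table;
# the next pass picks the name up everywhere it appears.
second = disassemble(firmware, {0: "reset"})
print("\n".join(second))
```

Each pass produces a more readable listing, and the symbol table grows until the whole image is annotated.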

Is GOLEM available publicly, under an open source license?

I asked; they're three source control systems further down the road (VSS => ClearCase => SVN => Git). My friend couldn't find it. Sigh.

I wish. It was a project I did as a contractor. That company has changed source control systems twice since then, and been bought. I guess I should have kept a copy.

Practically everyone here.

I would suspect that practically everyone here is using x86-64 in fact.

And all the issues mentioned there are present in the x86-64 versions of Intel CPUs as well. Some of them may be present only in x86-64 CPUs.

Are you kidding?

For all of its shortcomings, I'd still pick x86 over MIPS (which is truly horrendous; or maybe it's just that once you go CISC you never take the RISC) any day of the week.

Nothing in this paper has anything to do with the ISA.

But in practice, if you deal with x86, even AMD isn't that much better.

I know. But I'm saying that everyone can flame on it as much as they want; in the end, everyone will come back to it.
