
Intel x86 considered harmful – survey of attacks against x86 over last 10 years - chei0aiV
http://blog.invisiblethings.org/2015/10/27/x86_harmful.html
======
pjc50
"System management mode" is a tremendous wart and should be removed wholesale,
with Intel adopting a more ARM-style trusted boot chain with explicit
cooperation from the OS or hypervisor. And while you're at it, kill UEFI and
install a pony for me.

(Seriously, SMM serves either bizarre ILO features that high-end vendors like
but are rarely used, or security agencies looking for a layer to hide in.)

~~~
rwmj
Actually ILO is pretty useful :-)

I have an APM (ARM64) Mustang, and this takes a rather different approach, but
probably not one you'll think is better. The chip advertises 8 x 64 bit cores,
but there's a 9th 32 bit core which runs all the time, even when the machine
is powered down (although obviously still connected to mains power). It runs a
separate firmware, in its own RAM, but can access the main memory at will and
invisibly to the main OS.

One way to look at this is it's brilliant that we can just put a tiny
Cortex-M3 in a spare bit of silicon and have it do useful management stuff.

~~~
pjc50
_It runs a separate firmware, in its own RAM, but can access the main memory
at will and invisibly to the main OS_

All watched over by hypervisors of loving grace.

How do you know what the firmware does? Is it even possible to inspect it, let
alone replace it? It's just another part of the attack surface - not
necessarily deliberately, but if there are exploitable bugs in that firmware
that can be triggered from the rest of the system, it's another security risk.

~~~
rwmj
It's possible to update it; not sure about replacing it with one's own code. I
know this is "whataboutism" but here goes: Is this different from Intel ME
processors with their "hidden" Sparc core?

[http://www.slideshare.net/codeblue_jp/igor-skochinsky-enpub](http://www.slideshare.net/codeblue_jp/igor-skochinsky-enpub)

~~~
msbarnett
> Is this different from Intel ME processors with their "hidden" Sparc core?

Minor quibble: The IME is not Sun's _SPARC_ architecture, it's ARC
International's ARC, the Argonaut RISC Core, which has its origins in (of all
things) the Super Nintendo's SuperFX chip.

~~~
nickpsecurity
Didn't even know they had ARC processors in them. That's a trip.

------
kragen
Probably worth pointing out that the author is the project lead of Qubes, one
of the very few promising projects in the vast wasteland of computer security.

~~~
kachnuv_ocasek
Very few? Seriously?

~~~
kragen
Seriously. The vast majority of computer security effort is wasted on things
like the advisory-and-patch cycle, pen testing, and virus scanning, which can
never, by their very nature, provide computer security. That's not to say you
don't have to do them — it's just that they're not _productive_.

~~~
kachnuv_ocasek
Oh, I understand now and I agree wholeheartedly.

There's been some exciting progress in the formal verification department in
recent years, though.

~~~
kragen
I agree!

------
petecooper
The book referred to by the article -- `Platform Embedded Security Technology
Revealed` -- appears to be available for download at no cost right now[1].
Pricing error or not, I've just completed checkout without issue.

For completeness, I have no affiliation or connection with Apress -- please
consider this a heads-up.

[1] [http://www.apress.com/9781430265719](http://www.apress.com/9781430265719)

~~~
pgeorgi
It's been free for months, so probably not a mistake.

------
jbob2000
So I read the blog post and skimmed the PDF, and I'm left with some questions.
If these security issues have been present for 10 years, but there hasn't been
any widespread malicious exploitation of them, are they really issues?

To create an analogy: my car doesn't have bulletproof glass, so someone could
easily shoot it up and I'd be dead. But nobody really goes around shooting up
cars, so is it an issue?

~~~
pgeorgi
The problem is that if you're trying to build a secure computing environment
(like Joanna is with Qubes OS), you run into limitations all the time.

Those platform issues may not be a problem for Jane Doe on Windows 10, but
when users decide that they need more security than that (and Qubes points in
the right direction, although there's still some miles to go) they may have a
reason (or just paranoia).

In either case, they won't be very happy with the sad state that is x86
"security" because there are way too many places where an undue trust into
Intel is implied.

E.g. the SGX feature, which can run userland code in a way that even the
kernel (or SMM) can't read it: the keys are likely mediated by the Management
Engine (ME), which also comes with network access and a huge operating system
(for the purposes of an embedded system: the smallest version is 2MB) that
you, the user, can't get rid of.

So who is SGX protecting you from if you fear involvement by nation-state
actors? x86 isn't for you in that case (Intel's version in particular, but
pretty much all alternatives are just as bad) - and that's what this paper
points out.

~~~
pdkl95
Intel describes[1] SGX as a feature designed to "enable software vendors to
deliver trusted[2] applications", where applications would "maintain
confidentiality even when an attacker has physical control of the platform and
can conduct direct attacks on memory".

This already suggests the owner of the CPU isn't who they are protecting, but
it gets worse (even before we consider the risk from AMT). Starting an SGX
enclave seems to require[3] a "launch key" that is only known by Intel,
allowing Intel to control what software is allowed to be protected by SGX.

[1] [https://software.intel.com/en-us/blogs/2013/09/26/protecting-application-secrets-with-intel-sgx](https://software.intel.com/en-us/blogs/2013/09/26/protecting-application-secrets-with-intel-sgx)

[2] Before the term "DRM" was coined, the same crap used to be called "trusted
computing" (back when Microsoft was pushing Palladium/NGSCB)

[3] [https://jbeekman.nl/blog/2015/10/intel-has-full-control-over-sgx/](https://jbeekman.nl/blog/2015/10/intel-has-full-control-over-sgx/)

~~~
Spivak
This kind of feature would be amazing for security if it weren't going to be
immediately abused by DRM-encumbered vendors, MS, and vague yet menacing
government agencies trying to lock users out of their own devices.

If I could provide all the keys, my machine could be completely locked down
and damn near impossible to break into, even with complete physical access and
an ECE degree.

~~~
derefr
One thing we would actually want, here, though, is a setup where you can rent
out your computer (i.e. as an IaaS provider), without being capable of
monitoring the renter. In that kind of setup, the tenant does not want you to
own "all the keys to your machine"—or, at least, they want to have some way to
verify that you have disabled/discarded those keys.

~~~
andreasvc
I don't see the point of this. Either you trust your cloud provider, or you
don't put it in the cloud. You could think of a technical solution to prevent
monitoring, but how can you ever be sure that your provider has actually
implemented it? Plus, I don't think providers would want something like this;
if there's something gravely illegal going on, you want to be able to know and
ban that user from your service.

------
seccess
An interesting point I read a while ago (wish I could find the article) is
that variable-length instruction sets (like x86) are preferred by authors of
malicious software over fixed-length sets because the binaries are harder to
analyze. In a variable-length ISA, you must use a recursive descent parser to
find all code paths in the program, because jump targets are specified in
bytes or words instead of discrete instructions. This allows someone to jump
into what might be the data portion of an instruction when parsed one way, and
now the behavior totally changes because the bytes are being interpreted
another way.

~~~
kaesve
Are you talking about "The Geometry of Innocent Flesh on the Bone: Return-
into-libc without Function Calls (on the
x86)"([https://cseweb.ucsd.edu/~hovav/dist/geometry.pdf](https://cseweb.ucsd.edu/~hovav/dist/geometry.pdf))?
Great paper with a great title.

~~~
seccess
I wasn't; I'm pretty sure it was a blog post, not an academic paper.
Regardless, this paper looks awesome. Thanks for the link :D

------
Symmetry
Hmm, my favorite vulnerability is x86's lack of self-synchronization meaning
that the same byte-stream could be two different streams of valid instructions
depending on where you start reading.

[http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html](http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html)

~~~
rockdoe
This technique is also used to render W^X useless.

------
Animats
There just have to be backdoors built into the Intel Management Engine. Intel
won't disclose what code it executes, so we have to assume there's a backdoor.
The question is, _whose_ backdoor.

It would be useful to install some honeypot machines which would appear to be
interesting to governments (an ISIS bulletin board, for example) and record
every packet going in and out.

~~~
nickpsecurity
This is why I laugh about people here who laugh about backdoors in their TRNG,
etc. Intel's _been_ backdoored for AMT, etc. for a while. Those circuits, due
to NRE costs, have to be in most of their chips whether they advertise them or
not. They have deep read access into everything in the system, with who knows
what write access. We also know some of their chipsets have radios in them,
which might be in the others, permanently or temporarily disabled.

Just a huge black box of interconnected black boxes, at least one set of which
is definitely a backdoor. And the worst thing is I heard it can work when the
machine is entirely or somewhat powered down. (!) I don't know for sure
because I won't buy one lol. The old stuff, less likely to have those
features, works fine for me with my builds.

Gaisler's stuff and RISC-V are the best hope, as they're both open hardware
plus getting fast. Gaisler's are already quad-core with as much I.P. as people
could ever use. Anyone wanting trustworthy hardware knows where to start on
building it. CheriBSD on the CHERI capability processor is also open-source
and can run on a high-end FPGA. So, there's that for use or copying in a
Gaisler modification.

~~~
throwaway7767
> Gaisler's stuff and RISC-V are the best hope, as they're both open hardware
> plus getting fast. Gaisler's are already quad-core with as much I.P. as
> people could ever use. Anyone wanting trustworthy hardware knows where to
> start on building it. CheriBSD on the CHERI capability processor is also
> open-source and can run on a high-end FPGA. So, there's that for use or
> copying in a Gaisler modification.

How can you trust the FPGA? Or the very closed-source bitstream generator
necessary to compile the VHDL/Verilog code?

Assuming you want to manufacture secure processors from these designs, how can
you trust the chip fab?

I'm genuinely interested, as I'm not aware of any research into protection
from these issues.

~~~
nickpsecurity
You have several ways to deal with trust issues in hardware:

1\. Monitor hardware itself for bad behavior.

2\. Monitor and restrict I/O to catch any leaks or evidence of attacks.

3\. Use triple, diverse redundancy with voter algorithms for a given HW chip
and function.

4\. Use a bunch of different ones while obfuscating what you're using.

5\. Use a trusted process to make the FPGA, ASIC, or both.

I've mainly used Nos. 2-4, with No. 5 being the endgame. I have a method for
No 5 but can't publish it. Suffice it to say that almost all strategies
involve obfuscation and shellgames where publishing it gives enemies an edge.
Kerckhoffs's principle is wrong against nation-states: an obfuscated and
diversified combination of proven methods is the best security strategy. Now,
ASIC development is so difficult and cutting-edge that knowing that the
processes themselves aren't being subverted is likely impossible.

So, my [unimplemented] strategy focuses on the process, people, and key steps.
I can at least give an outline as the core requirements are worth peer review
and others' own innovations. We'd all benefit.

1\. You must protect your end of the ASIC development.

1-1. Trusted people who won't screw you and with auditing that lets each
potentially catch others' schemes.

1-2. Trusted computers that haven't been compromised in software or
physically.

1-3. Endpoint protection and energy gapping of those systems to protect I.P.
inside with something like data diodes used to release files for fabs.

1-4. Way to ensure EDA tools haven't been subverted in general or at least for
you specifically.

2\. CRITICAL and feasible. Protect the hand-off of your design details to the
mask-making company.

3\. Protect the process for making the masks.

3-1. Ensure, as in (1), security of their computers, tools, and processes.

3-2. Their interfaces should be done in such a way that they always do similar
things for similar types of chips with the same interfaces. Doing it
differently signals caution or alarm.

3-3. The physical handling of the mask should be how they always do it and/or
automated where possible. Same principle as 3-2.

3-4. Mask production company's ownership and location should be in a country
with low corruption that can't compel secret backdoors.

4\. Protect the transfer of the mask to the fab.

5\. Protect the fab process, at least one set of production units, the same
way as (3). Same security principles.

6\. Protect the hand-off to the packaging companies.

7\. Protect the packaging process. Same security principles as (3).

8\. Protect the shipment to your customers.

9\. Some of the above apply to PCB design, integration, testing, and shipment.

So, there you have it. It's a bit easier than some people think in some ways.
You don't need to own a fab really. However, you do have to understand how
mask making and fabbing are used, be able to observe that, have some control
over how tooling/software are done, and so on. Plenty of parties and money
involved in this. It will add cost to any project doing it which means few
will (competitiveness).

I mainly see it as something funded by governments or private parties for
increased assurance of sales to government and security-critical sectors. It
will almost have to be subsidized. My hardware guru cleverly suggested that a
bunch of smaller governments (e.g. G-88) might do it as a differentiator and
for their own use. Pool their resources.

It's a large undertaking regardless. As far as specifics, I have a model for
that, and I know one other high-assurance engineer with one. Most people just
do clever obfuscation tricks in their designs to detect modifications or brick
the system upon their use, with optional R.E. of samples. I don't know those
tricks and it's too cat-and-mouse for me. I'm focused on fixing it at the
source.

EDIT: I also did another essay tonight on cost of hardware engineering and
ways to get it down for OSS hardware. In case you're interested:

[https://news.ycombinator.com/item?id=10468534](https://news.ycombinator.com/item?id=10468534)

~~~
throwaway7767
I guess I still don't follow. Allow me to better specify the threat model I
have in mind:

Consumer wants one computer system that he trusts. Consumer should be able to
get one without having to trust any of the manufacturers or integrators. They
should not be able to subvert the security of the system, assuming the
published code and specs contain no errors. There should be no black boxes to
trust.

Design team wants to make and provide open hardware. They want to service
Consumer, and they want to do it in a way that Consumer does not need to trust
any blackbox processes.

How does this happen? Note that I'm not asking about keeping the VHDL code
secure, how to physically secure the shipment to the fab company, etc. I'm
asking how Consumer, who gets one IC, can verify that the IC matches exactly
with the published VHDL code and contains no backdoors.

It seems you mainly focus on how the design team can minimise the chances of
subversion. That's a much lower bar and not really sufficient in my mind.
There are still too many places to subvert, and the end consumer still needs
to trust his vendor, which is the same situation we have today.

The bit about multiple independent implementations with voting (NASA-style)
sounds extremely expensive and inefficient, but also very interesting for
high-security systems. Are you aware of any projects implementing it for a
general-purpose computer, specifically to prevent hardware backdooring (as
opposed to for reliability)?

UPDATE: To clarify, as wording is important in these kinds of discussions:
When something is described as 'trusted', that's a negative to me, as a
'trusted' component by definition can break the security of the system. We
need a way to do this without 'trusted' components. So when you say 'Use a
trusted process to make the FPGA, ASIC, or both.', that sounds like exactly
what we have today - the consumer gets a black box, and no way to verify that
it does what it's claimed to do. The black box must be 'trusted' because
there's no other way. Me knowing that the UPS shipment containing the mask had
an armed guard does not make me more likely to want to trust the chip.

~~~
nickpsecurity
"Design team wants to make and provide open hardware. They want to service
Consumer, and they want to do it in a way that Consumer does not need to trust
any blackbox processes. How does this happen?"

That was covered here: "I have a method for No 5 but can't publish it. Suffice
it to say that almost all strategies involve obfuscation and shellgames where
publishing it gives enemies an edge."

There are black-box processes trusted and checked in my best scheme, though,
with security ranging from probabilistic to strong with some risks. Mainstream
research [1] has a few components of mine. They're getting closer. DARPA is
funding research right now into trying to solve the problem without trust in
masks or fabs. We're not there yet. Further, the circuits are too small to see
with a microscope, the equipment is too expensive, things like optical
proximity correction algorithms are too secret, properties of fabs too
varying, and there's too little demand to bring this down so that just anyone
can do it openly. Plus, even the tooling itself is black boxes of black boxes,
out of sheer necessity due to its esoteric nature, constant innovation,
competition, and patents on key tech.

Note: Seeing chip teardowns at 500nm-1um did make me come up with one method.
I noted they could take pictures of circuits with a microscope. So, I figured
circuit creators could create, distribute, and sign a reference image for what
that should look like. The user could decap and photo some subset of their
chips. They could use some kind of software to compare the two. If enough did
this, a chip modification would be unlikely except as a denial-of-service
attack. Alas, you stop being able to use visual methods around 250nm and it
only gets harder [2] from there.
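The compare-against-a-signed-reference idea can be sketched as a toy in
Python. Everything here is a simplifying assumption (pre-aligned grayscale
pixel lists, an arbitrary intensity threshold); real die-photo comparison
needs image registration, lens correction, and much more:

```python
# Toy sketch: compare a decapped-die photo against a vendor-signed
# reference image. Both "images" are flat lists of grayscale values,
# assumed already aligned -- a big simplification of the real problem.
def fraction_changed(reference, sample, threshold=16):
    """Fraction of pixels whose intensity differs by more than threshold."""
    assert len(reference) == len(sample), "images must be the same size"
    diffs = sum(1 for r, s in zip(reference, sample) if abs(r - s) > threshold)
    return diffs / len(reference)

reference = [10, 200, 10, 200] * 4   # hypothetical signed reference image
tampered = list(reference)
tampered[3] = 40                      # one "modified" circuit region
print(fraction_changed(reference, tampered))  # 0.0625 (1 of 16 pixels)
```

If enough independent users ran checks like this on random chips from a
batch, an inserted modification would risk detection; as the comment notes,
though, the visual approach stops working below roughly 250nm.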

Very relevant is this statement by a hardware guru that inspired my methods,
which embrace and secure black boxes instead of going for white boxes:

"To understand what is possible with a modern fab you'll need to understand
cutting-edge lithography, advanced directional etching, organic chemistry, and
physics that's not even nearly mature enough to be printed in any textbook.
These skills are all combined to repeatedly create structures at 1/10th the
wavelength of the light being used. Go back just 10 or 15 years and you'll
find any number of real experts (with appropriate PhD qualifications) that
were willing to publicly tell you just how impossible the task of creating
20nm structures was, yet here we are!

Not sure why you believe that owning the fab will suddenly give you these
extremely rare technical skills. If you don't have the skills, and I mean
really have the skills (really be someone that knows the subject and is
capable of leading-edge innovation), then you must accept everything that your
technologists tell you, even when they're intentionally lying. I can't see why
this is any better than simply trusting someone else to properly run their fab
and not intentionally subvert the chip creation process.

In the end it all comes down to human and organizational trust."

Very well said. It's still an argument for securing the machines they use or
the transportation of designs/masks/chips. The critical processes, though,
will boil down to you believing someone who claims expertise and to have your
interests at heart. I'm not sure I've even seen someone fully understand an
electron microscope down to every wire. I'll assure you the stuff in any
fabrication process, from masks to packaged IC's, is _much_ more complex.
Hence, my framework of looking at it.

"how to physically secure the shipment to the fab company, etc. I'm asking how
Consumer, who gets one IC, can verify that the IC matches exactly with the
published VHDL code and contains no backdoors."

Now, for your other question, you'd have to arrange that with the fabs or mask
makers. Probably cost extra. I'm not sure as I don't use the trusted foundry
model [yet]. My interim solution is a combination of tricks that don't
strictly require that but are mostly obfuscation. You'd need guards you can
trust who can do good OPSEC and it can never leave your sight at customs. You
still have to trust mask maker, fab, and packager. That's the big unknown,
though, ain't it? The good news is that most of them have a profit incentive
to crank out product fast in a hurry at lowest cost while minimizing any risks
that hurt business. If they aren't attacking or cooperating, it's probably for
that reason.

"It seems you mainly focus on how the design team can minimise the chances of
subversion. That's a much lower bar and not really sufficient in my mind."

That's semi-true. Re-read my model. The same one can protect the consumer with
minor tweaks. That's because my model maps to the whole lifecycle of ASIC
design and production. One thing people can do is periodically have a company
like ChipWorks tear it down to compare it to published functionality. For
patents and security, people will do that already if it's a successful
product. So, like Orange Book taught me long ago, I'm actually securing the
overall process plus what I can of its deliverables. So long as process stays
in check, it naturally avoids all kinds of subversions and flaws. High
assurance design and evaluation by independent parties with skill do the rest.

"The bit about multiple independent implementations with voting (NASA-style)
sounds extremely expensive and inefficient, but also very interesting for
high-security systems. Are you aware of any projects implementing it for a
general-purpose computer, specifically to prevent hardware backdooring (as
opposed to for reliability)?"

It's not extremely expensive: many embedded systems do it. Just takes extra
hardware, an interconnect, and maybe one chip (COTS or custom) for the voting
logic. These can all be embedded. Those of us doing it for security all did it
custom on a per-project basis: no reference implementation that I know of.
There are plenty of reference implementations of the basic scheme under
phrases like "triple modular redundancy", "lockstep", "voting-based
protocols", "recovery-oriented computing", etc. Look those up.
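As a minimal sketch of just the voting step (assuming simple, directly
comparable outputs), a 2-of-3 majority vote over diverse replicas can look
like this in Python; the three "replicas" here are hypothetical stand-ins for
the diverse hardware units:

```python
from collections import Counter

def tmr_vote(results):
    """Majority-vote over the outputs of three independent replicas.
    A 2-of-3 quorum wins; a three-way disagreement is a detected fault
    (failure or tampering), so we raise instead of guessing."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no quorum: replica outputs all disagree")
    return value

# Three diverse implementations of the same computation (hypothetical
# stand-ins for chips from different suppliers):
replicas = [
    lambda x: x * x,
    lambda x: x ** 2,
    lambda x: sum(x for _ in range(x)),
]
outputs = [f(7) for f in replicas]
print(tmr_vote(outputs))   # 49
```

The same vote still returns 49 if any single replica misbehaves, which is
what lets diverse redundancy mask one subverted or faulty unit.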

You can do the voting or error detection as real-time I/O steps, transactions,
whatever. You can use whole systems, embedded boards, microcontrollers,
FPGA's, and so on. The smaller and cheaper stuff has less functionality, with
lower odds of subversion or weaknesses. It helps to use ISA's and interfaces
with a ton of suppliers for the diversity and obfuscation part. If you're
targeted, don't order with your name, address, or general location. A few
examples of fault-tolerant architectures follow. You're just modifying them to
do security checks and preserve invariants instead of mere safety checks,
although safety tricks often help given the overlap.

App-layer, real-time embedded
[http://www.montenegros.de/sergio/public/SIES08v5.pdf](http://www.montenegros.de/sergio/public/SIES08v5.pdf)

Onboard an ASIC in VHDL [http://www.ijaet.org/media/Design-and-analysis-of-fault-tolerant-Microprocessor-based-on-TMR-using-VHDL-Copyright-IJAET.pdf](http://www.ijaet.org/media/Design-and-analysis-of-fault-tolerant-Microprocessor-based-on-TMR-using-VHDL-Copyright-IJAET.pdf)

FPGA scheme
[http://crc.stanford.edu/crc_papers/yuthesis.pdf](http://crc.stanford.edu/crc_papers/yuthesis.pdf)

A survey of "intrusion-tolerant architectures" which give insight
[http://jcse.kiise.org/files/V7N4-04.pdf](http://jcse.kiise.org/files/V7N4-04.pdf)

"To clarify, as wording is important in these kinds of discussions: When
something is described as 'trusted', that's a negative to me, as a 'trusted'
component by definition can break the security of the system."

Oops. I resist grammar nazis but appreciate people catching wording that
really affects understanding. That example is a mistake I intentionally try to
avoid in most writing. I meant "trustworthy" and "trusted" combined. You can't
avoid trusted people or processes in these things. The real goal should be to
minimize the amount of trust necessary while increasing assurance in what you
trust. Same as for system design.

"Me knowing that the UPS shipment containing the mask had an armed guard does
not make me more likely to want to trust the chip."

Sorry to tell you that it's not going to get better for you outside of making
sacrifices for above-style schemes, which are only probabilistic and with
significant unknowns in the probabilities. Tool makers, fabs, and packaging
must be semi-trusted in all schemes I can think of. Designs must be turned
into circuitry at some point. The best mix is putting detection, voting, or
something critical on an older node or custom wiring, i.e. what you can vet by
eye if necessary. You can still do a lot with 350nm. Many high-assurance
engineers use older hardware with hand-designed software between modern
systems due to subversion risk. I have a survey [3] of that stuff, too. :)

Note: My hardware guru did have a suggestion I keep reconsidering. He said
most advanced nodes are so difficult [4] to use that they barely function at
all. Plus, mods of an unknown design at the mask or wiring level are unlikely
to work except in the most simplistic cases. I mean, they spend millions
verifying circuits they _understand_, so arbitrary modifications to black
boxes should be difficult. His advice, though expensive, was to use the most
cutting-edge node in existence while protecting the transfer of the design and
the chips themselves. The idea being that subversion of the ASIC itself would
fail or not even be tried due to difficulty. I like it more the more I think
about it.

[1]
[https://www.cs.virginia.edu/~evans/talks/dssg.pptx](https://www.cs.virginia.edu/~evans/talks/dssg.pptx)

[2]
[https://www.iacr.org/archive/ches2009/57470361/57470361.pdf](https://www.iacr.org/archive/ches2009/57470361/57470361.pdf)

[3]
[https://www.schneier.com/blog/archives/2013/09/surreptitious...](https://www.schneier.com/blog/archives/2013/09/surreptitiously.html#c1762647)

[4] [http://electronicdesign.com/digital-ics/understanding-28-nm-soc-design-arm-based-cores](http://electronicdesign.com/digital-ics/understanding-28-nm-soc-design-arm-based-cores)

------
justin_vanw
" "Considered Harmful" Article Titles Considered Derivative and Uncreative"

------
nickpsecurity
Best enumeration of x86 security problems I've seen so far. Solid argument to
avoid Intel in security-critical products where possible. :)

------
davesque
The title's dumb but the paper has some good info in it.

------
nn3
To paraphrase Stroustrup:

There are two kinds of systems: the ones that have security holes and the ones
that people don't use.

------
hmage
Yes yes, everyone should move to MCST Elbrus :D

~~~
nickpsecurity
No, use Gaisler's stuff:

[http://www.gaisler.com/index.php/products/ipcores/soclibrary](http://www.gaisler.com/index.php/products/ipcores/soclibrary)

Also SPARC, but with plenty of GPL. It has a quad-core, too, with all of them
designed to be easily modified and re-synthesized. :)

~~~
e12e
There are a few of these (open architectures) - but does anyone know how much
(ballpark) it'd cost to make and support something like the Raspberry Pi 2
(i.e. a full SoC with gig Ethernet, USB, HDMI, SATA)? Say 10,000 units?

I'm assuming it'd be expensive, as it doesn't appear anyone's doing it...

~~~
alexforencich
Several million dollars. Everything is extremely expensive - IP licenses, ASIC
layout software licenses, simulation and verification software and possibly
hardware, mask costs, line setup costs, wafer production costs, packaging
costs, testing costs, etc.

~~~
e12e
IP licenses for GPLed CPU cores and schematics? Several million doesn't sound
that bad. It means that the bar moves to 100k rather than 10k units (if the
goal is to break even in the short term). And it's tricky to sell 20k
units/year for five years, as the cost to upgrade (clock, RAM) would probably
be in the same ballpark as the initial investment?

~~~
alexforencich
IP licenses for things like analog clock management components/PLLs, analog
Ethernet PHYs, analog serializers and deserializers for HDMI, SATA, USB 3,
etc. These are all mixed-signal components. I am not aware of any open source
designs for any of these for modern ASIC targets. Most open source designs
target FPGAs which already have these components built onto the FPGA itself
(i.e. the open source design uses the module as a 'black box'). These will
probably come in GDSII form (actual layout, not a schematic, RTL, etc.) for a
specific process with a specific foundry. If you want to design those
yourself, then you would have to get additional licenses for analog design and
simulation suites. And you might have to re-spin a couple of times (with
millions of $ in mask costs) on each targeted process technology to get the
kinks worked out.

~~~
e12e
Thank you for clarifying. Basically I thought maybe something like:

[http://opencores.org/donation](http://opencores.org/donation)

already existed - but apparently not (except for targeting FPGAs as you
mention) ?

~~~
nickpsecurity
He addressed your point when he said most of them target FPGA's and often
leverage what's already on them. I'll add that the quality, documentation, and
so on at opencores.org seem questionable, given all the complaints I read from
pros and amateurs alike. Some are good, but I'm not qualified to say past what
was ASIC-proven.

The analog stuff he mentioned is _really tricky_ on any advanced node.
Everything is difficult at least. It all needs good tooling that's had around
a billion a year in R&D (Big Three) going back over a decade to get to the
point they are. OSS tooling is getting better, esp. for FPGA's. However, open-
source ASIC's aren't going to happen with an open-source development model.
Like many great things, they'll be built by teams of pros and then
open-sourced. Gotta motivate them to do that. Hence, my development models in
the other post.

~~~
e12e
Right. Which is of course why we have stuff like the NASA/ESA making and
releasing designs - big government projects with highly skilled staff. But
they don't have much interest in releasing a "personal computer" or a "smart
phone" (I'm sure they'd love to have an open hw platform to use for smart
phones and tablets - or work stations and super computers, just that it's not
high up on the list of priorities in the "millions of dollars" budget lists).

[ed: I'm thinking of things like LEON etc. - but as mentioned, and as I
understand it, for the ASIC case, maybe not the whole eval board is open. And
it's not really in the same ballpark as the dual/quad multi-GHz CPUs we've
come to expect from low-end hardware:

[http://www.gaisler.com/index.php/products/boards/gr-cpci-leon4-n2x](http://www.gaisler.com/index.php/products/boards/gr-cpci-leon4-n2x) ]

~~~
nickpsecurity
Oh, let me be clear that any starting point will definitely have more work to
do and will _never_ be in the ballpark of top Intel/AMD/IBM CPU's. The reason
is that they use large teams of pros with the best tools, often doing _full-
custom_ HW development. Full-custom means they'll do plenty to improve the
HDL, the RTL, and even the wiring of the gates they use. Think of standard
cell as Java web applications, with full custom being like delivering a whole
platform with a board, firmware, assembler, OS components, and native
applications. That's maybe illustrative of the differences in skills and
complexity.

Example of a custom design flow:
[http://viplab.cs.nctu.edu.tw/course/VLSI_SOC2009_Fall/VLSI_L...](http://viplab.cs.nctu.edu.tw/course/VLSI_SOC2009_Fall/VLSI_Lecture_03.pdf)

Note: Load this up right next to the simple, 90nm MCU PDF I gave you and
compare the two. I think you'll easily see the difference in complexity. One
you'll be able to mostly follow just by googling terms, understanding a lot
of what they're doing. You're not going to understand the specifics of the
full-custom flow at all. There's simply too much domain knowledge built into
it, combining years of analog and digital design experience. Top CPU's hit
their benchmarks using full-custom for pipelines, caches, etc.

Example of verification that goes into making those monstrosities work:

[http://fvclasspsu2009q1.pbworks.com/f/Yang-GSTEIntroPSU2009....](http://fvclasspsu2009q1.pbworks.com/f/Yang-GSTEIntroPSU2009.pdf)

So, yeah, getting to that level of performance would be really hard work. The
good news is that modern processors, esp x86, carry lots of baggage that
drains performance and that we _don't need_. Simpler cores in large numbers,
plus accelerators, can be much easier to design and perform much better. Like
so:

[http://www.cavium.com/OCTEON-III_CN7XXX.html](http://www.cavium.com/OCTEON-III_CN7XXX.html)

Now, that's 28nm for sure. The point remains, though: Cavium didn't have
nearly the financial resources of Intel, yet their processors smoked Intel's
and were built in a shorter amount of time. Adapteva's 64-core Epiphany
accelerator was likewise created for a few million dollars by pros with a
careful choice of tooling. So, better architecture can make up for the speed
you give up by not going full-custom.

------
caf
The footnote link [13] in section "The audio card" is an unrelated footnote,
the correct one appears to be missing.

------
n0us
I really could do without "considered harmful" titles. x86 has been one of the
most influential technologies of all time and a clickbait title doesn't do it
justice imo.

~~~
hyperpallium
The currently preferred clickbait title on HN would be "x86 is the new goto"

~~~
dang
Really? I only see one of those, from 5 years ago:

[https://hn.algolia.com/?query=%22the%20new%20goto%22&sort=by...](https://hn.algolia.com/?query=%22the%20new%20goto%22&sort=byDate&dateRange=all&type=story&prefix&page=0)

~~~
hyperpallium
I meant the generic _X is the new Y_ ; "goto" was just a nod to Dijkstra's
essay.

------
0xdeadbeefbabe
Don't they know considered harmful essays are considered harmful?
[http://meyerweb.com/eric/comment/chech.html](http://meyerweb.com/eric/comment/chech.html)

------
qntty
Why the second post?

[https://news.ycombinator.com/item?id=10458311](https://news.ycombinator.com/item?id=10458311)

~~~
dhuramas
[https://news.ycombinator.com/item?id=10223645](https://news.ycombinator.com/item?id=10223645)
"We've adjusted the dupe detector to reject fewer URLs...[snip/].. Allowing
reposts is a way of giving high-quality stories multiple chances at making the
front page. Please do this tastefully and don't overdo it."

Considering that this second post got much more traction than the first, I
don't see anything wrong.

~~~
dang
In this case the same submitter posted two versions of the story:

[http://blog.invisiblethings.org/papers/2015/x86_harmful.pdf](http://blog.invisiblethings.org/papers/2015/x86_harmful.pdf)

[http://blog.invisiblethings.org/2015/10/27/x86_harmful.html](http://blog.invisiblethings.org/2015/10/27/x86_harmful.html)

Not a great approach; one ought just to pick the better of the two, which in
this case is the html, because it gives more background, loads faster, and
links to the pdf.

General remark: I doubt that we'll make the dupe detector sophisticated enough
to catch a case like this, but I do think we'll add software support for users
to identify dupes when they see them. That's what happens informally already
(as you all did in this thread, and by flagging the other post) so the
shortest path to better deduplication for HN seems to be: write software to
make community contribution easy. Also I kind of like the idea of giving a
karma point to the first user who correctly links a given pair of posts.

------
pmalynin
For all of its shortcomings, I still would pick x86 over MIPS (which is truly
horrendous - or maybe it's just that once you go CISC you never take the
RISC) any day of the week.

~~~
pcwalton
Nothing in this paper has anything to do with the ISA.

~~~
yuhong
But in practice, if you deal with x86, even AMD isn't that much better.

