
Is it time for open processors? - signa11
https://lwn.net/Articles/743602/
======
B-Con
The problem is less about openness itself and more about quality of auditing.
Open isn't a magic bullet. As I understand it, these latest CPU vulnerabilities
were in the specs themselves. You could've found them by reading the manual.
See what good that did us.

But I posit that open projects do better with audits than closed because they
are more likely to have open audits as well, and an open audit is harder to
ignore or sweep under the rug if it's inconvenient to business.

~~~
simias
I'd pay good money for an open and _simpler_ platform to run my security-
critical tasks that don't require top performance. Auditing a modern high-end
CPU is probably a huge task even if you have access to the HDL because of how
complex they are. A simpler design (RISC, no out-of-order execution, basic
branch prediction and prefetching...) ought to be fast enough with modern
lithography to browse the web, send emails and do 99% of my daily tasks except
maybe compile C++ or Rust code.

The problem of course is that it's probably a very niche market at the moment.
Most people probably don't really understand what's at stake and might not
even care either way. As such I don't expect that people could manufacture
open CPUs and sell them at a reasonable price. So you'd end up with
overpriced, under-powered CPUs which won't drive adoption.

I can't imagine such a project being successful if it's not backed by a big
company or a government. I kind of wish the European Union would try its hand
at it, after all computing is absolutely critical these days (and it's only
going to get more critical as time passes) yet we're completely dependent on
American and Chinese companies to provide us with CPUs. If the EU thought
Galileo was worthwhile so that we'd have our own positioning system, surely it
also makes sense to have our own CPUs for critical tasks?

~~~
lambda
Why give up on performance?

IA-64
([https://en.wikipedia.org/wiki/IA-64#Architecture](https://en.wikipedia.org/wiki/IA-64#Architecture))
already exists, with its explicitly parallel instruction set, which leaves
branch prediction and speculative execution up to the software. So you can
still get many of the performance benefits, but it's under control of the
software, giving a lot more flexibility for being able to mitigate or
eliminate these kinds of issues.

I think it was probably introduced ahead of its time and targeted at the
wrong markets, but it's kind of sad that with the Spectre vulnerabilities we
don't have any way of comprehensively addressing them in software without
jumping through a lot of hoops and applying microcode updates.
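
Mainstream toolchains already expose a weak form of this software control. As a
minimal sketch (using the GCC/Clang builtins `__builtin_expect` and
`__builtin_prefetch`; the function and the hint constants chosen here are
illustrative, not from IA-64):

```c
#include <stddef.h>

/* Branch hints: tell the compiler which way a branch usually goes so it
 * can lay out and statically schedule the code accordingly. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Sum the positive elements of v, steering prediction and prefetch from
 * software rather than leaving both entirely to the hardware. */
long sum_positive(const long *v, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        /* Software prefetch: request v[i + 16] ahead of use
         * (0 = read, 1 = low temporal locality). */
        if (i + 16 < n)
            __builtin_prefetch(&v[i + 16], 0, 1);
        if (likely(v[i] > 0))  /* hint: most elements are positive */
            sum += v[i];
    }
    return sum;
}
```

These are only hints; an IA-64-style EPIC design goes much further and makes
the schedule itself explicit in the instruction encoding.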

~~~
pjc50
I think this is overlooking how unpopular IA64 was.

"Leaves branch prediction and speculative execution up to the software" means
that in practice you need to either use the Intel compilers or hand-optimise
your software to get the benefits, while simply recompiling your legacy
software with GCC or LLVM ends up being disappointingly slow.

~~~
lambda
Oh, yeah, I'm aware of many of the reasons that IA64 failed.

Intel could have solved this problem by contributing to GCC and LLVM instead
of keeping their optimizations proprietary. Then they would also be more
easily auditable as well.

~~~
jasonwatkinspdx
> instead of keeping their optimizations proprietary

There were no magic secret optimizations to release. It just straight up did
not work. They had to add back dynamic branch prediction, and even then the
load store latency was such trash that they had to put ginormous L3 caches on
it to get even close to reasonable performance.

~~~
lambda
That's fair. I have never worked with them, so I don't know the precise
details; but one of the big complaints I've heard was that compilers weren't
ready for them yet, while these days there's a larger field of open-source
compilers and there has been a lot more research into parallelism.

~~~
jasonwatkinspdx
> compilers weren't ready for them yet

Yeah, compilers today are no better. We found the limits of statically
scheduled parallelism pretty fast. On code that uses static scheduling, a
modern OoO processor can easily duplicate what IA64 was capable of (and a
pipelined loop using AVX will utterly smoke it), while being far better at all
the stuff IA64 failed at.
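
For what it's worth, "static scheduling" in software usually looks like this
kind of hand transformation (a hypothetical C sketch, not IA64 code): break
the serial dependence chain into independent accumulators so an in-order or
VLIW machine can overlap the operations without any OoO hardware.

```c
#include <stddef.h>

/* Naive reduction: one serial dependence chain through `sum`, so an
 * in-order machine can at best issue one add per add-latency. */
long sum_naive(const long *v, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += v[i];
    return sum;
}

/* Statically scheduled by hand: four independent accumulators expose
 * instruction-level parallelism explicitly. A vectorizer would do the
 * same transformation with AVX registers. */
long sum_unrolled(const long *v, size_t n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < n; i++)  /* remainder */
        s0 += v[i];
    return s0 + s1 + s2 + s3;
}
```

The catch the parent describes: an OoO core discovers this overlap dynamically
even in the naive loop, so the static transformation buys much less than
EPIC's designers hoped.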

------
mar77i
Friend of mine went to Hong Kong (from Europe) to work on exactly this:
[http://libresilicon.com/](http://libresilicon.com/)

~~~
amelius
What is the approach? Are they starting a new fab? Or will they simply lower
costs by batching multiple projects together?

~~~
mar77i
They're working on a collection of processes for producing these things, so as
to have a basically non-proprietary way to produce chips from scratch. The
other thing I know is that the guy has also been involved in other layers of
production, namely the generation and validation of circuitry.

The website I posted links a few pdfs for more info.

~~~
itchyouch
I think the challenge will ultimately be the scale and price at which open
processors would be available for the target market. It isn't just a
processor: it will potentially need a motherboard, chipset, and various
hardware configurations depending on the customer.

There would need to be a significant player buying these processors to make
them viable for smaller entities to piggyback on and purchase as well.

------
therealmarv
Open does not mean better. This is not how our world works. It would be nice
to have, but I assume every Intel chip I can buy (even with the throttling)
will be more powerful and cheaper than any open processor. There is also much
more to this thing than just the architecture (openness works in software but
not in hardware)... this would only work if a big player like AMD or Intel
jumps on this ship (but it would be like giving away intellectual property).

~~~
DarqWebster
If one only considers power / speed and cost, then open will certainly mean
worse.

However a processor with verifiable functionality has value. It's more
trustworthy. It can be checked for accidental, or deliberate, security flaws.

In many scenarios, I don't care how fast a processor is, if it's leaking data
then it's worthless.

~~~
therealmarv
For me this is an extreme view. If your security demands are that high, I'm
sure you can find even today CPUs that are at the level of "trustworthiness"
you want. One funny example: you can use a Raspberry Pi, which is not affected
by Meltdown or Spectre ;)
[https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-
vulne...](https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-
to-spectre-or-meltdown/)

~~~
DarqWebster
I understand that security is difficult, and that one can never be completely
secure.

But is wanting to know what my processor is doing, or wanting it to be free of
undocumented, obfuscated, proprietary code that runs at a higher priority than
any software, really that extreme a view?

We base so much of human progress on these little wafers of silicon, it
shouldn't be extreme to want to know what they do.

~~~
gatmne
>We base so much of human progress on these little wafers of silicon, it
shouldn't be extreme to want to know what they do.

Well said. I'll add that security is a threshold, and that computer systems
are extremely complex. Every bit of openness (and the verifiability such
openness affords) brings us closer to that ideal secure system.

------
d--b
Remember Heartbleed... While an open CPU is a great idea, it's hard to say
that an open CPU would have avoided the Meltdown or Spectre problem.

The Meltdown/Spectre vulnerabilities weren't there because people didn't pay
attention; it's that nobody thought of these optimization features as
potential vulnerabilities. And honestly it's hard to blame anyone: these were
really smart hacks!
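
For reference, the "smart hack" at the heart of Spectre variant 1 fits in a
few lines of C (a sketch following the public write-ups; the array names and
sizes here are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];            /* in-bounds data */
uint8_t probe[256 * 4096];     /* one cache-line-spaced slot per byte value */

/* Architecturally this is correct: an out-of-range x returns 0. But the
 * CPU may *speculatively* execute the body with an attacker-chosen x
 * before the bounds check resolves, touching a line of probe[] indexed
 * by the out-of-bounds byte. The line left in cache can later be found
 * with a timing side channel, leaking memory the code never "read". */
uint8_t victim(size_t x, size_t array1_size) {
    if (x < array1_size)
        return probe[array1[x] * 4096];
    return 0;
}
```

Nothing in this pattern violates the architectural spec, which is exactly the
point: the bug is in the optimization, not in the code.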

~~~
allemagne
The article does mention this:

>Finally, even if we end up with entirely open processors, that will not bring
an end to vulnerabilities at that level ... Open hardware may give us more
confidence in the long term that we can retain control of our systems, but it
is certainly not a magic wand that will wave our problems away.

The author is arguing more from a freedom/control perspective, from what I
understand. When it comes to these kinds of vulnerabilities, an open CPU might
be easier to patch, or it might be easier to disable the affected components.

Apart from that, maybe crowd-sourcing and open-sourcing the eventual fix is
faster or better than a private org. That doesn't necessarily seem like
something the author is saying, but it seems like a reasonable factor to
consider. I can see arguments either way that I'm not really qualified to
defend.

------
debatem1
Past time, but progress on the RISC-V front is amazing. If prototype-scale
fabbing got cheaper, I expect we'd see a wave of parts focused on security and
auditability in short order.

~~~
sddfd
If one kind of RISC-V chip became the go-to CPU for small smart appliances,
demand could rise enough for a fab to pick up mass production.

~~~
debatem1
The problem isn't that you can't fab for mass production; it's that the
development costs to get there are very high. Working on an FPGA only gets you
so far.

~~~
mickronome
I was under the impression that for the process nodes you would reasonably use
for small smart appliances, the fab cost isn't prohibitive, and certainly not
so high that nobody would pick up the tab if the gadget makers started to
require open IP for their IoT / smart appliance offerings.

There are, and have been, a few CPU/MCU upstarts that have actually done
shorter runs of silicon, and while they are usually quite well funded, the
numbers don't seem to be prohibitive at all.

If the need were there NOW, fabbing something with existing open IP would
quite likely be "free money" (or free market share), as the semiconductor
industry appears to be so entrenched in IP licensing that I sincerely doubt
they would be quick to follow suit, if only because any open offering from
them would be seen as detrimental to market dominance and would lower the
value of IP cross-licensing deals.

Unfortunately I don't think the market need is there, and won't be for quite
some time. I would love to be wrong though.

~~~
debatem1
Not sure I follow.

Shuttle service costs for small runs of silicon are, for most people in most
situations, tens of thousands of dollars at a minimum. My argument is that if
the price dropped to the point where it was competitive with FPGAs at
prototype scale (e.g., 10 parts might cost $1000 but not $10000) then you would
see parts on the market built that way quite quickly. I think the proof for
that is the number of low volume commercial parts (HSMs, LTE base stations,
etc) which carry FPGAs today. Seems the market has already spoken?

------
tombert
I know that there are probably a billion technical hurdles for this, but in my
perfect universe, CPUs would all be FPGA based; at that point, couldn't
updates be almost as simple as a software update?

I know that laws of physics kind of preclude this idea, but a guy can dream.

~~~
skummetmaelk
Then you can increase the power consumption of all electronics by 50+% and
halve the processing power. And that is optimistic.

~~~
awelkie
I appreciate what you're saying, but for devices with long lifetimes things
may not be so clear. Consider a wifi device with a >15 year lifetime. Let's
say such a device was built with SDR and FPGA, it conformed to 802.11b, and it
used twice the power of an equivalent ASIC design. But 15 years later, this
device was able to upgrade in software to 802.11ac, and so it was able to take
advantage of spatial multiplexing and higher-order modulation to lower its
power usage. So now this device uses twice as much power as an equivalent
802.11ac ASIC, but maybe it's more power efficient than that 802.11b ASIC from
15 years earlier.

I guess what I'm saying is that by allowing a device to upgrade in software to
more power-efficient designs, you might claw back some of the efficiency lost
by using an FPGA when you consider the entire product's lifetime.

~~~
RandomOpinion
> _I guess what I 'm saying is that by allowing a device to upgrade in
> software to more power-efficient designs, you might claw back some of the
> efficiency lost by using an FPGA when you consider the entire product's
> lifetime._

That presumes you put in a big enough FPGA/CPU or whatever fifteen(!) years in
advance that has enough resources to handle the increased processing
requirements for a future protocol (expensive and wasteful and there's no
guarantee that you didn't guess wrong and it's still too small/slow) and that
your RF signal path was designed well enough to handle the new signal
requirements (expensive and wasteful even if you were prescient enough to
guess what future requirements were). And that's on top of the fact that
inexpensive electronic devices simply aren't built with components rated to
last 15 years.

tl;dr software doesn't change the laws of physics

------
ntoll
The [http://www.lowrisc.org/](http://www.lowrisc.org/) project is doing this
as a SoC (disclosure: one of the founders is a buddy). It's RISC-V based and
fully open source.

------
lnsru
Hardware is a completely different domain from software. Sadly, one cannot
throw a hardware bug fix onto GitHub and expect mass adoption the next day.
Hardware cycles are way longer and the associated costs (take the mask set
alone!) are too high. How many users would be capable of reviewing those
millions of lines of Verilog code? Integrated-circuit openness is a utopia.

------
zaxomi
Maybe it is time to rethink how computers are built? For example, the kernel
could run on a dedicated CPU, with dedicated cache and memory.

~~~
walkingolof
I think the problem has more to do with software; CPUs are just built to meet
the demands.

Today most software can only scale if you add a faster processor. That has to
stop; before that happens, there is not much the processor manufacturers can
do. Their hands are tied IMO.

~~~
fmap
I think the problems have more to do with economics. Creating new modern CPUs
requires a lot of capital investment, making CPU vendors more risk averse.
That's why modern CPUs by and large are not built to meet the demands of
future software - they're built to meet the demands of Excel 97 (exaggerating
slightly for effect).

The programming interface of a modern x86 CPU is best thought of as a virtual
machine with a JIT. The JIT performs some optimizations to make old code run
more in parallel without the need for recompilation. This is a terrible model
for compilers and programmers, since it makes optimizations hit or miss, and
of course it's why we're in this whole spectre/meltdown mess at the moment. If
we switched to a more reasonable programming model which actually met the
needs of software (fine grained interprocessor communication without going
through central memory, many more registers, actual software access to
pipelines, caches, etc.) then it would make a lot of old software run slower
(through emulation), but new software could finally make better use of your
hardware...

~~~
andars
> making CPU vendors more risk averse

Seeing how Itanium went, their risk aversion seems pretty wise.

Itanium seems to satisfy most of your desires with its VLIW/EPIC architecture,
which exposes much more to the compiler.

~~~
deepnotderp
The Itanic really doesn't seem like a fair reason to be un-innovative. It did
okay in HPC, and Bob Rau's untimely death also hampered it.

------
cm2187
If we see further fragmentation of CPU architectures, it's not going to be fun
for users or developers. The fragmentation of OSes already makes cross-
platform development a big challenge. It would make the equivalent of
.NET/Java/WebAssembly/POSIX critical.

~~~
mpweiher
> further fragmentation of cpu architectures

"Fragmentation"??

We have a huge _monoculture_ of Intel x86/x64 on the desktop, driven by
Intel's fab advantage that smothered all alternative (and often superior)
architectures. And recently, a second one has sprung up with ARM on mobile.

And it hasn't really been good for the industry.

Fortunately, the Windows monoculture in OSes has been broken, so much that MS
wasn't able to extend it to mobile and has been driven to give away their OS
updates.

In terms of porting, CPU architecture is far less of a problem than you might
think. Tiny NeXT managed to make its OS available for four CPU architectures:
68K, i386, SPARC and HP-PA, with apparently PPC and MC88K available in-house.
Write Once, Run Anywhere™, all with native binaries, no intermediate layer
needed.

And these were processors with different endianness, RISC, CISC, sliding
register windows, sparse register sets, etc.

~~~
em3rgent0rdr
ISA fragmentation isn't much of a worry for people who only run open-source
software. Debian is especially good at maintaining ports of tens of thousands
of packages for 10 ISAs [1].

[1] [https://www.debian.org/ports/](https://www.debian.org/ports/)

~~~
em3rgent0rdr
And they still have unofficial ports of 6 other outdated ISAs complete with
Debian 9.0 net install ISOs [1].

[1]
[http://cdimage.debian.org/cdimage/ports/](http://cdimage.debian.org/cdimage/ports/)

------
gravypod
I'm interested in the possibility of eventually having FPGA-driven systems
where you can load up an ISA and use that. Imagine if every week you could
expect an ISA update that came with a power-usage or speed improvement.

~~~
ladberg
The problem is that no FPGA-driven system will come close to beating out a
modern CPU, so people will have very little incentive to actually use it
because it will probably be more expensive and definitely slower.

~~~
gravypod
The same thing could be said for any new open CPU: it would be slow and
expensive. Unlike a normal CPU, though, your FPGA system could be programmed
to run your main CPU, PCIe bus, USB bus, video card, etc., and it could be
updated to add features and run faster.

------
ece
Somebody needs to take the risks of designing, making, and selling a RISC-V
processor/chipset and associated peripherals. Frankly, someone like Broadcom,
who has the expertise and might want a royalty-free RPi, could work just fine.
I can't immediately imagine anyone else big enough doing it without upsetting
the market too much. Sure, an open foundation could do it if they raised
enough cash, but it seems unlikely.

Having the backing of industry/academia/hobbyists is no small feat, and I hope
RISC-V can keep pushing further into the consumer market.

------
HwdSlle
perhaps to process the information it is necessary to visualize the phantom
environment where all information is transmitted in order to manage and
construct as part of the processing the so-called reverse engineering;

understanding the electricity and visualizing the magnetism and diverse waves
and its interactivities; in order to detect as to the example of a bird
landing on the high voltage wires and how the properties of the particles can
be read as well as stored in plants etc. and how the materials are properly
constructed in order to distort the assimilation necessary to start the
machine ;

and then destroy the school logic as to algebra titles etc, in which it is
part after the experiments to start organizing circuit logic; maybe something
about the quality of the electricity by improving the components including
rust and dusting etc. perhaps thus by sealing the components in the process of
initializing the machine to verify the veracity of the circuits; some degree
of upgradeable serial number in order to connect other parts, perhaps
identifying a first problem of logic; \- and perhaps the idealization of
unique components like the Raspberry system-board? where volatile parts is
taken as user Hard-Disk content being the logic of software?

so better focusing on the complexity required than the ultra-fast Quantum
Computer processor? can be produced at higher levels for virtual realities;
[?] Just Guess;

------
dschuetz
Of course it is! What is this question? No, man, not yet. Everyone has a thing
just for the _closed_ ones. Instead of asking "Is it time...?" you just do it
when it's time! Damn that clickbait. Although, _open_ as in OpenSSL gave open
source a bad name, because without critical audits there is even less security
in _open_, because then it might be _wide open_ instead! In open source, when
a thing is made and it works for most, nobody freakin' cares anymore!

~~~
jasonkostempski
"What is this question?"

They're not asking if it would be nice to have, they're asking if it can be
economically feasible now. If there isn't going to be a very large use case
for open but inferior processors, a lot of money will be lost by anyone
investing. That isn't a problem with software.

------
agumonkey
As someone mentions in the comments, the Parallella founder wrote a paper
about the process of bringing an open FPGA to the market:

[https://lwn.net/Articles/743716/](https://lwn.net/Articles/743716/)

direct link [https://www.parallella.org/wp-
content/uploads/2017/01/hipeac...](https://www.parallella.org/wp-
content/uploads/2017/01/hipeac_lessons.pdf)

------
robert_muench
Well, the idea is about 20 years old:

[https://web.archive.org/web/19991128065121/http://openip.org...](https://web.archive.org/web/19991128065121/http://openip.org:80/oc/index.html)

However, CPUs are very complex these days and require a lot of equipment
around them to be produced. The road to open IP is IMO much harder than the
road to open source.

------
thinkloop
Another important piece is being able to verify that the manufactured hardware
meets the open specs they claim to (and wasn't tampered with, or contain
errors). Something like an md5 check for hardware.

With open software you can read the code, compile it and run it: zero trust
required. With hardware you can read the spec, but must trust the
implementation.

------
walkingolof
IMO the problem is on the software side. If software were built to harness a
distributed architecture, the CPU companies' job would be much easier, so more
could do it, which would increase competition and drive price cuts.

------
StreamBright
I don't see how an open processor would be any better than a closed one.

~~~
dragontamer
Do we really know that AMD is immune to Meltdown aside from trusting them?

I mean, I reasonably trust AMD here. I don't see any reason why they'd lie
about it. Plus, everyone in AMD seems super-confident that they're immune to
the Meltdown issue due to how they implement speculative cache loads.

But such analysis is purely within AMD's circle of engineers. None of us
outside of AMD can verify their claims.

~~~
StreamBright
Do you really trust Airbus and Boeing with your life when you don't have all
the blueprints of their planes?

~~~
dragontamer
Are you suggesting that we open up a government agency dedicated to the
testing of CPU chips, and enforcing strict regulations?

The FAA and its set of safety regulations are written into US law. That's why
airplanes are trustworthy and safe: safer than in other countries on average.

[https://www.faa.gov/aircraft/safety/](https://www.faa.gov/aircraft/safety/)
[https://www.faa.gov/regulations_policies/handbooks_manuals/a...](https://www.faa.gov/regulations_policies/handbooks_manuals/aircraft/amt_airframe_handbook/)

Or do you think the US just MAGICALLY gets high safety results through sheer
determination? There are systems in place, and I think it is reasonable to
suggest that we should start building a system for computer-chip processors.

Open Source is one possible proposal. I'm not sure what other proposals exist
to ensure that the internals of chips aren't compromised.

~~~
StreamBright
I am suggesting that other engineering disciplines produce much safer products
without open source and open designs. How that is achieved is another
question. Bridges are not safe because of any government agency but because
mechanical engineers design them to be safe.

~~~
dragontamer
> Bridges are not safe because of any government agency

Building a bridge is one of the most heavily regulated things in the USA.

For details, see:
[https://www.fhwa.dot.gov/bridge/nbis.cfm](https://www.fhwa.dot.gov/bridge/nbis.cfm)

Every single bridge in the USA is documented, listed, inspected and regulated.
There's even an entire division of the US military that helps out: the Army
Corps of Engineers. So it's multi-agency, multi-department, and it even
includes military service members.

Bridge Building might be the only thing more regulated than Airline safety in
this country.

> because mechanical engineers design them to be safe.

Civil Engineer, btw, not Mechanical. And civil engineers need to pass a strict
level of training and licensing to practice in the USA. Granted, this is on a
state-by-state basis rather than on a national basis (like bridge building).

But it's still a government regulation, even if it's state-level instead of
federal.

[https://www.nspe.org/resources/licensure/how-get-
licensed](https://www.nspe.org/resources/licensure/how-get-licensed)

All 50 states require 4-years of work experience before you earn the title of
"Professional Engineer". And that's the title you need before you can lead
something like a Bridge-building project.

Not only are bridges highly regulated, but the people who are allowed to build
bridges are highly regulated, at both the State and Federal levels.

----------

Now I'm not saying we enforce the highest regulation standards upon our CPU
manufacturers. I'm basically saying "Stop coming up with bad examples". The
incredibly safe examples you have brought up so far (Airline safety and Bridge
Building) are the result of years of Government regulation and laws.

~~~
StreamBright
It seems you are really keen on government regulations. You can regulate as
much as you want, but if we can't build a safe bridge because we don't know
the laws of physics, then legal considerations are pointless.

~~~
dragontamer
Yes. Which is why there are government regulations saying "Professional
Engineers" (a regulated title btw) must know a certain level of physics before
they are allowed to build a bridge.

~~~
pjmlp
Which is also a thing in countries where software engineering is a 3 to 5 year
degree, and not a made up title.

------
andyidsinga
re "solution to the hardware problem" in post:

a glance at opencores.org shows some interesting prospects under the
processors and system on chip categories.

ex.
- [https://opencores.org/project,rv01_riscv_core](https://opencores.org/project,rv01_riscv_core)
- [https://opencores.org/project,riscv_vhdl](https://opencores.org/project,riscv_vhdl)

------
randomerr
I would love to play with a RISC-V unit. But like the one they linked to, it
was limited in memory and very expensive compared to other modern Arduino-
compatible boards.

~~~
jasonkostempski
If you're really just looking to have fun and learn, $60 is nothing. You can
burn through that at Dave & Busters in 15 minutes playing games. They also say
it's the "fastest Arduino-Compatible dev kit" but I don't know enough about
the space to know how accurate that is.

------
jlebrech
With open hardware you could have a mainboard with one single chip, and you
could then pop in an SoC that matches the I/O you have.

Open also means making the software that creates the chip designs open too;
otherwise you'll just get leapfrogged by closed designs.

This could overcome being a few generations behind on fab tech: just have the
exact silicon you need.

You could also design an open mainboard with multiple ZIF sockets, which
would be very flexible, and maybe futureproof.

------
0xFFC
It is a little bit unrelated, but I am really eager to know HN's opinion about
SystemC. What do you guys think about SystemC?

------
laythea
Are you kidding? Who knows how many undocumented processor instructions and
undocumented processor operating systems are doing what with your data?
Imagine a world with transparent CPUs and you are imagining a world without
mass data collection.

------
NedIsakoff
What about your network device? Your I/O device?

------
slivym
There are just so many issues with this it's almost laughable to be honest.

Firstly, let's assume you basically need a big company to open-source their
existing IP, because open source is nowhere near where Intel is today. So let's
look at the obvious choice: Intel has traditionally perceived itself to be a
manufacturing company. It tries to produce as many chip widgets as possible
and producing CPUs or SSDs or network chips is just their way of finding ways
to sell their widgets. The core of their business is fabrication.

Now, the only companies they let use their foundries to produce other chips
are almost exclusively companies that they end up buying (I'm looking at you
Altera).

By open sourcing their CPU designs they are practically guaranteeing that some
of the chips they produce today would be produced by Samsung and TSMC. Try
explaining to their share holders that you're giving away the crown jewels,
damaging the core of the business and hoping for what....?

Secondly, taping out a chip is _hundreds of thousands of dollars_. Even if you
did opensource everything, the only people who could actually produce the chip
would be huge corporations, and the process of taping out isn't "Let's save it
to a floppy and send it off". All of the tape out process requires proprietary
tools from the fab, which are only available to serious customers. So your
sources might be open, but no one outside the company would be able to
actually use them. And this is non-trivial: TSMC consider their tools a core
part of the IP that differentiates them; they'll never release them under
anything but a very strict contract with a client.

Finally, one of the key benefits to opensource is that you can fix something
yourself, you can contribute and change things. Well not in hardware. The best
you can manage is to fix the problem in the next $X00k production run.

Laying all that aside: no one has actually described which benefits of open-
source software over closed source we're trying to map to hardware. To me a
key benefit of open source is that we don't depend on corporations as
gatekeepers to fix things, and that isn't solved by hardware open source.

~~~
relate
Couldn't they open-source the design/implementation without a permissive
license? I.e. everyone could audit, but nobody could use it commercially.

~~~
quadrangle
That would be non-free / non-open, to be clear. A copyleft license is non-
permissive but is still free/open. Just want to be clear, since some people
mistakenly think, for example, that commercial use requires permissive
licensing.

------
adinobro
We already have some. I know the Arduino CPU is open source (slow), and there
are open-source MIPS CPUs, but again they are slower than ARM and x86 CPUs.

The problem is people don't want slow CPUs.

~~~
detaro
> _I know the Arduino CPU is open source_

It's not. The board layout of the Arduino boards is open-source, as is the
software around them, but the microcontroller on them is not.

------
quantummkv
Open processors are a very bad idea. Look at what's happening to Android. The
same thing will happen with CPUs: every OEM will fork the open design and put
all of their stupidity inside it in the name of features and security. Bugs
like Spectre and Meltdown will become commonplace.

Kernel devs' entire time will be spent working around the various "features"
of the OEM designs. Then someone will come up with a JVM and Docker for the
CPU, leading to another round of madness.

There is no harm in having alternate architectures. But putting everything up
on GitHub is a recipe for disaster.

~~~
Lxr
Why doesn’t this happen with Linux, in your opinion? Is it the license?

~~~
quantummkv
This does happen in Linux to some extent. Look at Wayland, for example: broken
on Nvidia hardware because Nvidia favors its own implementation.

The reason this does not happen with the Linux kernel is that Red Hat and
Canonical go to great lengths to backport kernel patches, which by proxy
affects all the distros based on them. Also because the Linux desktop is not
big enough for the OEMs of the majority of PC users to care about.

Android is a flavour of Linux that went big. The fact that OEMs can clone the
kernel and put their own drivers into it without being forced to update is
what causes the most grief in the Android world.

