
Darpa invests $100M in a silicon compiler - adapteva
https://www.eetimes.com/document.asp?doc_id=1333422
======
ur-whale
"Most importantly, we have to change the culture of hardware design. Today, we
don’t have open sharing … "

This, to the 100th power.

The culture in the EDA industry is stuck in the 1950s when it comes to
collaboration and sharing. It's very frustrating for newcomers and people who
want to learn the trade.

As was pointed out by someone in another hardware related HN thread, what can
you expect from an industry that is still stuck calling a component
"Intellectual Property"?

The un-sharing is built into the very _names_ used to describe things.

~~~
WalterBright
I designed the ABEL language back in the 80's for compiling designs targeted
at programmable logic arrays and gate arrays. It was very successful, but it
died after a decade or so.

It'd probably be around today and up to date if it was open source. A shame it
isn't. I don't even know who owns the rights to it these days, or if whoever
owns it even knows they have the rights to it, due to spinoffs and mergers.

~~~
seattleeng
I think you'd be surprised. I went to Caltech (graduated 2014), which is a
fairly well known university for their Electrical Engineering program, and I
learned ABEL in my Sophomore/Junior year. My instructor, an admittedly old
school hardware engineer, was in love with the language and had it as part of
our upper level digital design curriculum for a few labs. FWIW, I think it was
super intuitive and a hugely valuable learning tool. I suppose that doesn't
mean it isn't "dead" for professional purposes, though.

~~~
WalterBright
Thanks very much for the kind words!

Perhaps you can ask the professor who has the copyright on ABEL now, so we
can ask the holder if it can be open sourced.

Miracles like this do happen - last year Symantec allowed the Symantec C++
compiler to be fully open sourced!

~~~
ptrott2017
Walter - it appears Xilinx is the current copyright holder, and ABEL was last
supported in the Xilinx 10.1 ISE toolset released circa 2008 (current release
is 14.7). An introductory guide can still be found here:
[https://bit.ly/2NfkLWq](https://bit.ly/2NfkLWq)

~~~
rch
This is the URL behind the shortened link:

[https://www.xilinx.com/support/documentation/sw_manuals/xili...](https://www.xilinx.com/support/documentation/sw_manuals/xilinx10/help/iseguide/mergedProjects/abelref/whnjs.htm)

~~~
WalterBright
Thanks, I'll contact them and see what they have to say.

------
wcrichton
My advisor at Stanford is working on an open-source hardware toolchain to
solve these exact problems. The Agile Hardware Center is trying to bring
software methodologies of rapid prototyping and pervasive code sharing/reuse
to ASICs, CGRAs, and FPGAs:
[https://aha.stanford.edu/](https://aha.stanford.edu/)

~~~
mips_avatar
Any tips on getting started with FPGA on custom pcbs?

~~~
civilitty
What exactly do you mean by "getting started with FPGA on custom PCBs?" Have
you made custom PCBs with high density BGAs before? Most non-trivial FPGAs (to
me that means you can easily fit a decent softcore processor with space left
over for your FPGA logic) will be ball grid arrays and almost impossible to
DIY without xray inspection equipment and a reflow oven. You can get what you
need for a few hundred $ on eBay if you're patient and lucky but even then,
getting to the point where you can solder the chips reliably will take time.

If you have the budget for professional assembly, then I would start with a
Digilent product that has an available reference design you can copy later.
First get used to the FPGA and how it works (they are largely incomparable to
a CPU or GPU except for the fact that they both have transistors). Then,
design a simpler board with a smaller FPGA and work your way up to the big
500-1k pin chips.

~~~
slededit
You don't need X-ray inspection and your reflow oven can be a toaster oven. I do recommend
a cheap microscope and a good pair of tweezers though. With this you can do
0.5mm pitch BGA although that is pushing it. I've made hundreds of prototypes
this way.

The fear of BGA parts is seriously overblown. The only really expensive part
is that finer pitch parts require tighter tolerances on the PCB, which will
take you out of the batch PCB services price tier.

edit: toaster oven, not toaster

~~~
jacquesm
Your comment reads as if you equate 'BGA' with 'SMD', a BGA is a _ball grid
array_ with up to 1,000 tiny pads that have been pre-dipped in solder. Your
toaster isn't going to work.

~~~
slededit
BGA is actually easier than leaded SMD parts. The tiny leads tend to bridge
easily. With BGA you can be up to half the pitch off and it will center
itself. I actually only go for leadless and BGA now because anything else is
more of a hassle.

It is a bit rude to assume I don't know what a BGA part is.

~~~
jacquesm
The tweezers bit is what triggered that. Handling a large BGA with tweezers is
going to scratch the PCB if you're not ultra careful.

Anyway, if you do this stuff often enough then I see no reason why you
wouldn't get the proper tools, a rework station and an actual reflow oven or
something with a PID controlled heating element would make your life so much
easier. Working with bad tools would drive me nuts.

The reason the larger BGAs center themselves is that as soon as the solder
goes fluid there is a lot of accumulated surface tension trying to reduce the
size of the bridge, and that will center the part all by itself. For that to
work properly, though, everything has to become fluid more or less at once and
stay fluid until the part has shifted to the right position.

~~~
slededit
The tweezers are for the 0802 passives and nudging the FPGA itself. It takes
forever but then so does programming the pick and place for a one-off.

As for the reflow, you can actually get more consistent results with the
toaster oven - it just won't be able to handle the volume of actual
production. Whatever you do, just don't try going "semi-pro" and getting one
of those IR ovens from China. Stick with the $40 Walmart special. The toaster
oven, when heated slowly, is much less likely to have hot and cold spots.
Stenciling and placing parts take up a lot more time and are much more error
prone.

------
whaaswijk
I just got back from the Design Automation Conference in San Francisco. It is
one of the major EDA conferences. Andreas Olofsson gave a talk about the
silicon compiler. There was serious discussion about open source EDA. As far
as I could tell it is still unclear what the role of academia will be. It
seems tricky to align academic incentives with the implementation, and most
importantly, maintenance of an open source EDA stack. However, there is quite
some buzz and people are enthused. A first workshop, the "Workshop on
Open-Source EDA Technology" (WOSET), has been organized.

I also thought I'd try to answer some questions that I've seen in the
comments. Disclaimer: as a lowly PhD student I am only privy to some
information. I'm answering to the best of my knowledge.

1) As mentioned by hardwarefriend, synthesis tools are standard in ASIC/FPGA
design flows. However, chip design currently often still takes a lot of manual
work and/or stitching together of tools. The main goal of the compiler is to
create a push-button solution. Designing a new chip should be as simple as
cloning a design from GitHub and calling "make" on the silicon compiler.

2) Related to (1). The focus is on automation rather than performance. We are
okay with sacrificing performance as long as compiler users don't have to deal
with individual build steps.

3) There should be support for digital, analog, and mixed-signal designs.

4) Rest assured that people are aware of yosys and related tools. In fact,
Clifford was present at the event :-) Other (academic) open source EDA tools
include ABC for logic synthesis & verification, the EPFL logic synthesis
libraries (disclaimer: co-author), and Rsyn for physical design. There are
many others; I'm certainly not familiar with all of them. Compiling a library
of available open source tools is part of the project.

Edit: to be clear, WOSET has been planned, but will be held in November.
Submissions are open until August 15.

~~~
eternauta3k
Do you have a link to WOSET? Googling doesn't turn up anything.

~~~
whaaswijk
The website isn’t up yet, but it’s organized by Prof. Sherief Reda at Brown
University. You can contact him for inquiries.

------
adrianmonk
So it costs $500 million every time someone designs a SoC and (before now)
nobody has spent $100 million trying to make that more efficient?

~~~
xt00
I think the primary focus / value here would be if they can somehow
dramatically reduce the cost of making masks and ICs. Let's say these
researchers make the actual process of converting C code to silicon super
easy, then you go to make the chip and they are like, "cool, the mask / fab
cost is like 500k USD for samples" -- then basically the exact same people
who currently make chips will keep making chips. What would be awesome would
be if DARPA funded somebody converting an old 90nm fab into a low cost foundry
that was basically fully automated and subsidized it to allow people to make a
chip design for $1000 USD. Then you would have a flood of people just being
like, "well, it's $1000 bucks, why the hell not try to make this chip even if
it's wrong?" Most businesses would happily roll the dice on stuff for that
kind of price, and some individuals would as well.

~~~
namibj
Uh, for 90nm you should be in the 4-digit range for a handful of prototypes.

~~~
xt00
Oh yea I was totally exaggerating, since obviously the prices vary
tremendously depending on what you are trying to do. I’ve previously worked
for a semiconductor company, so there is sort of a “you can pay as much as you
want” option always available if you want a super awesome mixed signal chip.
For the general public, do you have a particular place you are aware of where
you would pay the 4 digits for the mask plus a few prototypes? I was not aware
that there were even simple chips you could get in that range.. I mean 4
digits implies what, like say 5k... so that’s pretty low cost.... really?

~~~
kragen
Both CMP and MOSIS have a number of options that come in under US$10k for a
handful of prototypes. Large process nodes, so they won't be competitive for
digital stuff, but for analog or mostly analog mixed-signal, they might
actually be able to beat anything you can buy off the shelf. Haven't tried it
yet myself, though.

------
437598735
On the opposite side of the spectrum, there's Chuck Moore (Forth creator) who
in trying to find the simplest combination of software and hardware for his
projects devoted a lot of time to a DIY VLSI CAD system. There's fascinating
history behind it, although the actual OKAD system is essentially a trade
secret for his company.

His site has been down for a while, but someone thankfully mirrored most of
the pages here:
[https://colorforth.github.io/vlsi.html](https://colorforth.github.io/vlsi.html)

More history about OKAD, plus links to more about Forth both software and
hardware:
[http://www.ultratechnology.com/okad.htm](http://www.ultratechnology.com/okad.htm)

------
JumpCrisscross
Side note: when people complain about the military budget, projects like these
should be noted. Political reality in America today is that military R&D and
jobs programs are easier to fund than civilian ones, so that’s where projects
go to live.

~~~
mmiller9
I sincerely doubt our enormously bloated defense budget is because of
research...

~~~
aphextron
Guns and bombs are cheap. Everyone has plenty of them. The power of our
military is in force projection; that is, the ability to use those guns and
bombs anywhere at any time.

That requires massive, sustained logistics spending. To the point it may seem
absurd and wasteful. But the alternative is fighting a fair fight on equal
footing. I’d rather not do that.

~~~
GW150914
Guns and bombs are cheap. The Zumwalt, F-35, B-2, and more are not cheap. I
would point out that “smart” bombs and cruise missiles are not cheap, and the
cost really starts to add up. Nuclear weapons research, production, and
maintenance isn’t cheap.

So really it’s fair to say that guns and bombs aren’t cheap, but they are also
a distraction from the delivery systems, which are catastrophically expensive.

 _To the point it may seem absurd and wasteful. But the alternative is
fighting a fair fight on equal footing. I’d rather not do that._

Yeah? Are you sure the alternative isn’t to spend 3 or 4 or 5 times as much as
any other nation instead of 7 times as much? Maybe the alternative is to avoid
boondoggles, and focus on proven tech. Of course the _real_ alternative is
that doing so would get in the way of the real business of arms dealing.

~~~
aphextron
>The Zumwalt, F-35, B-2, and more are not cheap.

You're right, they sure aren't. But as a result no other military force on
earth besides NATO has the capability to launch air superiority fighters from
amphibious assault ships, and perform multi-ton circumglobal bombing sorties.
That kind of capability doesn't come cheap, and shouldn't be dismissed.

Take away the B2 and we are fighting toe-to-toe with Russian/Chinese long
range bombers.

Without the F-35 we are on equal footing with Chinese/Russian carrier based
aircraft.

Without Zumwalt class ships, we are going head on against Chinese missile
destroyers and subs of equal capability.

The whole point is that you don't want an even remotely fair fight. "Keeping
up" with others' spending, even within an order of magnitude, is a really bad
idea if you can help it.

~~~
vkou
If the US military is, in any serious way, going toe-to-toe with
Russian/Chinese bombers, I hope you've got your fallout shelter stocked,
because as likely as not, both sides in the conflict will cease to exist ~20
minutes after that point in time.

~~~
v_lisivka
Ukraine is still alive after 4 years of war with Russia. Relax.

~~~
GW150914
They’re at war with Russia because they gave up their nuclear weapons. No one
is invading a nuclear power, which I think is the point the post you’re
responding to was making.

~~~
v_lisivka
How do you know that Ukraine still doesn't have nuclear weapons?

~~~
vkou
Because I assume that the Russians, when they armed it (In Soviet times), and
then disarmed it (In post-Soviet times), could count up to, and back down from
twenty.

If they still have weapons, they sure as hell haven't been using their
existence as a deterrent. The whole _point_ of having nukes is letting
potential aggressors know that you have them, and that, if attacked, you may
be crazy enough to use them.

~~~
v_lisivka
> The whole point of having nukes is letting potential aggressors know that
> you have them, and that, if attacked, you may be crazy enough to use them.

You must know that Ukraine was attacked 4 years ago. So the "whole point" is
not applicable in this case.

------
dalbasal
I have questions, if anyone knows something about hardware. What would a
"silicon compiler" let one do? What exactly gets easier/cheaper and what
exactly could new chip designs yield?

~~~
hardwarefriend
It's difficult for me to be sure, since the article makes it sound as if
they're attempting something novel, but synthesis tools are standard in
ASIC/FPGA design flows.

Currently, the best synthesis tools are closed-source and extremely expensive.
Imagine the benefit of having gcc/clang be free software. That is the kind of
effect that is at stake here.

Usually, hardware designers will write RTL code (Verilog/VHDL) which describes
the hardware slightly above the gate level. In order to turn this description
into a web of logic gates (called a netlist), the design is processed by a
synthesizing program. The produced netlist describes exactly how many AND,
NAND, OR, etc. gates are used and how they're connected, but it doesn't
actually describe where the gates are placed on the chip or the route the
interconnections take to connect the gates. To generate that info, the netlist
is fed into another synthesis tool (usually called place and route).

This is a simplified version, but even at this level of detail, there are
important factors affecting chips:

- How many gates? (fewer might be better)

- How far are the gates from each other? (closer is better; less power, area,
cost, timing)

- How often will the gates switch? (less is better)

- More....

More advanced synthesis tools improve area, cost, power, timing. They also
allow designers to have less expertise and still obtain the same result as
experienced designers by optimizing out micro-level inefficiencies in the
design (though experienced designers will also lean on the synthesis tool).
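For intuition, here's a toy Python sketch (not any real tool's format; the
cell names and nets are invented) of what a netlist boils down to after
synthesis, and the sort of raw data the factors above are computed from:

```python
# Toy netlist sketch: gate instances plus the nets connecting them.
# Each gate is (instance name, cell type, {pin: net}).
from collections import Counter

netlist = [
    ("u1", "AND2",  {"a": "in0", "b": "in1", "y": "n1"}),
    ("u2", "NAND2", {"a": "n1",  "b": "in2", "y": "n2"}),
    ("u3", "OR2",   {"a": "n2",  "b": "in0", "y": "out"}),
]

def gate_counts(netlist):
    """How many of each cell type the design uses (a rough area/cost proxy)."""
    return Counter(cell for _, cell, _ in netlist)

def net_fanout(netlist):
    """How many gate pins touch each net (a rough congestion proxy)."""
    fanout = Counter()
    for _, _, pins in netlist:
        for net in pins.values():
            fanout[net] += 1
    return fanout

print(gate_counts(netlist))        # each cell type appears once here
print(net_fanout(netlist)["in0"])  # in0 touches two gates -> 2
```

Real netlists also carry timing, drive strength, and physical information,
which is what the place and route step then works with.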

~~~
civilitty
To expand: compiling silicon isn't like compiling code even though it does use
a hardware description language. Not only do all parts of your "code" all run
at once, but you're also laying out physical transistors and mapping their
connections. This is fundamentally an NP-complete traveling salesman problem
with an absurd exponential explosion of complexity - aka how do you route
thousands or millions of connections that can't intersect on a 2D plane while
still doing what the code describes. Oh and the fun part: unless you're
careful, a change in a completely unrelated bit of code could break almost
anything in the system by making it impossible to route connections without
screwing up the timing of all the little bits.

There is no type checker that can deduce whether your design will work or not
and then output machine language. At the end of the day, with silicon you have
to actually figure out whether your design can be physically manufactured by
running a long compiler process and then testing it, often with rigorous
simulations before moving to the fab process.
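To make the routing point concrete, here's a minimal sketch of the classic
Lee/BFS maze-router idea on a 2D grid. This is deliberately a toy (one net,
one layer, an invented grid); real routers juggle millions of nets, multiple
metal layers, and timing:

```python
# Minimal Lee/BFS maze router: find a shortest path for one net on a 2D grid,
# treating cells already used by other nets as obstacles.
from collections import deque

def route(grid, src, dst):
    """Return a shortest obstacle-avoiding path from src to dst, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}          # visited set doubling as parent pointers
    q = deque([src])
    while q:
        r, c = q.popleft()
        if (r, c) == dst:       # reconstruct the path by walking parents
            path, cur = [], (r, c)
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None  # net is unroutable on this grid

# 0 = free, 1 = already occupied by another net
grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
path = route(grid, (0, 0), (2, 0))
print(len(path) - 1)  # -> 8: the wall forces a detour through (1, 3)
```

Note how occupying three cells of the middle row turns a 2-step route into an
8-step detour; multiply that interaction across millions of nets and the
"unrelated change breaks everything" problem above falls out naturally.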

------
ThinkBeat
I don't quite get the open source angle in the comments here.

If I managed to get my grubby hands on a moderately modern computer, I can use
all manner of open source software and I can create wonderful new software.
The barrier of entry is fairly low in rich countries.

If AMD open sourced all the design aspects of their chips, I would have to get
a loan to build a $100 million fab to have any practical way to enjoy it?

I can see that if Intel/AMD/NVidia/Apple shared all aspects of their chips,
cross pollination might bring great things, and academic research would be
boosted and might end up giving back more to the community at large, but you
are talking about very few entities across the world that can afford fabs.

~~~
ur-whale
I believe you have it backwards: this is exactly why there is actually very
little risk for the big guys to actually share their knowledge.

The financial entry barrier for building a fab is so huge that what in
heaven's name would intel lose if they published the RTL for - say - their
integer division hardware to show and teach the whole world how it's done when
real professionals take a stab at it?

And if they're scared AMD might copy their integer division, why not publish
the Verilog code from 2 or 3 generations old h/w? (And this is probably a bad
example; I believe AMD and Intel are essentially done competing on stuff like
that.)

But what I am talking about here is basically _unthinkable_ given the current
culture in the EDA world: a person suggesting this inside one of the big shops
would be committing career suicide.

Conversely, if you navigate EDA discussion boards a little bit, there is no
end to the snarky or sometimes downright insulting comments made by big shop
insiders about how lame and terribly inefficient the open source hardware
designs published on the net actually are.

In other words: mocking outsiders for their ignorance instead of teaching them
how to do cool stuff. That's the culture of the EDA world. Time for a change.

~~~
et2o
What are the EDA discussion boards? Curious to peruse.

~~~
ur-whale
Here's one: [https://www.edaboard.com/](https://www.edaboard.com/)

------
esmi
What even is this project? There are no details on the DARPA page either.

Is it for PCB design, ASIC design or both? Is a constant current source also
considered a “small chip” or just digital designs?

Basically every EDA tool already has the ability to group sub modules which
one could distribute as open source if they chose.

Do it in kicad and put your circuit into a hierarchal symbol if you must be
all open source.

I get hard IP blocks from vendors all the time for inclusion in our ASICs.

It’s not the EDA tools that are preventing “openness”.

I was just joking the other day how all the PCB designs I’m reviewing lately
are just conglomerations of app note circuits and it’s really boring. So to me
it seems like there’s plenty of design reuse. :)

------
zik
I'm surprised to see no recognition of yosys, arachne-pnr and the icestorm
tools, which together form a free and open source HDL toolchain that already
exists and is pretty widely used.

~~~
ur-whale
These projects are exactly the road the EDA industry should be taking.

Unfortunately, they make _very_ slow progress because they have to
painstakingly reverse-engineer everything (with the possible exception of
Lattice stuff).

For Xilinx chips, where exactly _nothing_ is publicly documented at the lower
levels of the stack, they have to spend mountains of time re-discovering
everything.

Even if I deeply admire the effort and how far they've gotten, I can't help
but think: what a terrible waste of human talent and time.

Edit: I once asked a Xilinx employee why they didn't open source their entire
software stack, because it struck me that they were in the chip manufacturing
business, not the toolchain business (a blisteringly obvious fact when you
look at the quality of such monstrosities as, e.g., Vivado), and that open
sourcing the tools could enlarge their target market by a large margin.

The culture is so broken in that space that I don't think he even actually
understood the question.

~~~
437598735
Note I don't work at Xilinx. And be prepared that what I'm about to write may
seem incredibly cynical, sorry.

But if I were to take a guess at why the culture is the way it is, I'd say
that it's because programmable logic is fundamentally relatively small logic
tiles replicated across large areas.

That means across competing companies there's a high chance for infringing
upon arsenals of patents for rather mundane things like interconnect, logic
families, or memory cell layout where there are only a handful of viable
alternatives yet the patent offices were likely duped into accepting multiple
legalese interpretations of the same underlying tech. It's a minefield.

Xilinx is not really a chip manufacturing business either. They're fabless.
Imagine having a company that designs RAM memory and outsources everything
beyond the cell design. If you don't own the foundry itself you're not going
to last very long unless you encrypt the memory access protocols, obfuscate
your (probably patent infringing) hardware architecture by layers of
undocumented tooling, and dominate the industry by buying up any upcoming
contenders while cross-licensing stuff to build up a complex ecosystem of
interdependent tools required to get even the most basic project done.

~~~
ur-whale
Thanks for that perspective, I had not considered the angle that the EDA
industry is scared of itself because of patents.

Once more, the problem can be traced to the root evil that is patents, sold to
the world as essential for innovation, but which end up having the exact
opposite effect.

------
alexbeloi
A great blog post here ([https://wp.josh.com/2017/10/23/adventures-in-
autorouting/](https://wp.josh.com/2017/10/23/adventures-in-autorouting/))
about some different auto-routing software.

~~~
analognoise
Those routes look terrible. Maybe if everything you're doing is electrically
short and there are no high speed routes, it would work great. Basically it's
wonderful if all you make are blinkenlight projects.

But if you have to input all of the data that makes up a good route (including
coupling, ground/power planes, trace length matching, PDN noise, stackup,
EMI/C rules, etc, etc), and then review the whole thing anyway, what's the
point of the autorouter?

Also the article says nothing about the various algorithms involved, which are
interesting from a computational geometry standpoint. But the gulf between
algorithm or academic example and "commercial router" is huge!

This blog post is just... not very good. It's how I imagine a software person
who made some stupid blinky LED thing thinks about hardware.

------
slededit
It should be noted that Andreas Olofsson used to run Adapteva and close to
singlehandedly designed the Parallella processor.

------
kenferry
This kind of spending is so foreign to me.

At $250,000 per year per person, that supports 100 people for four years.

I… suppose that's not completely insane?

~~~
garmaine
Fully loaded (inclusive of benefits, payroll tax, etc.). That’s actually not a
lot.

------
eleitl
Thought I'd see Olofsson in there.

~~~
archgoon
Sounds like you're familiar with this space and this guy; any thoughts you'd
like to share?

~~~
ingenieroariel
I'll bite but consider all I write poorly informed wild speculation.

The person he mentions has been behind a lot of advancements both in getting
cheap FPGA based boards into the hands of users and in creating libre
silicon-proven IP. He recently left his company, which was making Zedboards
and Parallella among other things, to join DARPA.

The last big project he seemed to have worked on was a 1000-core processor, of
which a lot was open source, but a lot of the important bits were protected,
likely due to an NDA with the factory or the provider of the tools they used.

I hope what he is going to work on (EDA) will finally enable DARPA and others
to fund truly open source designs and tools and to work with fabs that would
allow access to really cheap ASICs or FPGA based systems, building on all the
momentum around platforms like RISC-V and his experience creating the Epiphany
cores.

~~~
ingenieroariel
OH! is an open-source library of hardware building blocks based on silicon
proven design practices at 0.35um to 28nm. The library is being used by
Adapteva in designing its next generation ASIC.

[https://github.com/parallella/oh](https://github.com/parallella/oh)

------
cottonseed
They should have given 1% of it to Clifford Wolf.

------
tlrobinson
I wonder if the decline of Moore’s Law will eventually lead to the
commoditization of ASIC fabrication?

Of course fabricating a chip will never be as cheap as writing a bit of
software, but maybe it will eventually be as cheap as, say, injection molding
a piece of plastic?

~~~
StringyBob
Heading in the opposite direction - at least for the bleeding edge
7nm/5nm/3nm ASICs you want in your next computer or smartphone.

Manufacturing costs (particularly fixed costs) are going up exponentially.
We're getting stuck on economics before physics. You need to be able to sell
10 million+ parts to cover your costs.

There are more opportunities if you don't need the best performance or lowest
power and use an older manufacturing process node like 65nm.

------
Aeolus98
I might be able to weigh in here.

Having these tools as open source and freely available is a huge deal for so
many industries. I've worked with these tools at an academic level and now at
a startup, and the magnitude of this enabling technology is amazing. Just the
tooling investment will be huge; making the core solvers and algorithms more
accessible should spawn a whole new wave of startups/research in effectively
employing them. Just these days, I've heard of my friends building theorem
provers for EVM bytecode to formally check smart contracts and eliminate bugs
like these [0].

These synthesis tools roughly break down like this:

1. Specify your "program"

- In EDA tools, your program is specified in Verilog/VHDL and turns into a
netlist, the actual wiring of the gates together.

- In 3D printers, your "program" is the CAD model, which can be represented
as a series of piecewise triple integrals.

- In some robots, your program is the set of goals you'd like to accomplish.

In this stage, representation and user friendliness are king. CAD programs
make intuitive sense and have the expressive power to describe almost
anything. Industrial tools will leverage this high-level representation for a
variety of uses, like checking in the CAD of an airplane whether maintenance
techs can physically reach every screw, or in EDA providing enough information
for simulation of the chip or high-level compilation (Chisel).

2. Restructure things until you get to an NP-complete problem, ideally in the
form "minimize cost subject to some constraints". The result of this
optimization can be used to construct a valid program in a lower-level
language.

- In EDA, this problem looks like "minimize the silicon die area, layers, and
power used subject to the timing requirements of the original Verilog", where
the low-level representation is the physical realization of the chip.

- In 3D printers it's something like "minimize time spent printing subject to
it being possible to print with the desired infill". Support generation and
other things can be rolled into this to make it possible to print.

Here, fun pieces of software from the field of optimization are used: things
like Clasp for answer set programming, Gurobi/CPLEX for mixed integer or
linear programs, and SMT/SAT solvers like Z3 or CVC4 for formal logic proving.

A lot of engineering work goes into these solvers, with domain-specific
extensions driving a lot of progress [1]. We owe a substantial debt to the
researchers and industries that have developed solving strategies for these
problems; they are a significant part of why we can have nice things, from
what frequencies your phone uses [2] to how the NBA decides to schedule
basketball games. This is the stuff that really helps to have as public
knowledge. The solvers at their base are quite good, but seeding them with the
right domain-specific heuristics makes so many classes of real-world problems
solvable.

3. Extract your solution and generate code

- I'm not sure what this looks like in EDA; my rough guess is a physical
layout or mask set with the proper fuckyness to account for the strange
effects at that small of a scale.

- For 3D printers, this is the emitted G-code.

- For robots, it's a full motion plan that results in all goals being
completed in an efficient manner.
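Step 2 can be made concrete with a tiny, invented placement instance:
"minimize total wirelength subject to one gate per slot". Brute force works at
this size; real flows hand problems like this, at millions of variables, to
the solvers listed above or to custom heuristics:

```python
# Toy placement as "minimize cost subject to constraints": assign four gates
# to four grid slots so the total Manhattan wirelength of the nets is minimal.
from itertools import permutations

gates = ["a", "b", "c", "d"]
nets = [("a", "b"), ("b", "c"), ("a", "d")]   # nets as gate pairs
slots = [(0, 0), (0, 1), (1, 0), (1, 1)]      # 2x2 grid of legal positions

def wirelength(placement):
    """Total Manhattan distance over all nets (the cost to minimize)."""
    return sum(abs(placement[u][0] - placement[v][0]) +
               abs(placement[u][1] - placement[v][1]) for u, v in nets)

# The "one gate per slot" constraint is enforced by enumerating permutations.
best = min((dict(zip(gates, p)) for p in permutations(slots)),
           key=wirelength)
print(wirelength(best))  # -> 3: every net ends up between adjacent slots
```

Swapping in a different objective or extra constraints (timing, congestion,
power) is essentially what separates this toy from a production placer, along
with the small matter of scale.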

[0] [https://hackernoon.com/what-caused-the-latest-100-million-
et...](https://hackernoon.com/what-caused-the-latest-100-million-ethereum-bug-
and-a-detection-tool-for-similar-bugs-7b80f8ab7279?gi=e1d1a15e098a)

[1]
[https://slideplayer.com/slide/11885400/](https://slideplayer.com/slide/11885400/)

[2] [https://www.youtube.com/watch?v=Xz-
jNQnToA0&t=1s](https://www.youtube.com/watch?v=Xz-jNQnToA0&t=1s)

------
absurdmind
It would be great if they could make something like a Bluespec Verilog[0], but
open source. This HDL is far better than traditional ones, IMHO.

[0]
[https://en.m.wikipedia.org/wiki/Bluespec](https://en.m.wikipedia.org/wiki/Bluespec)

------
baybal2
I don't see how this is not literally the same thing any HDL synthesis program
does. How?

------
petra
I see many chip vendors on the list of participating companies.

Won't this project reduce barriers to entry for their industry? And if so,
isn't it against their interests to participate?

~~~
ur-whale
I don't believe keeping barriers to entry high in an industry is a good thing
for companies in that industry.

Quite the contrary, lowering barriers creates more opportunities and thereby
new scope for growth.

It took MSFT 30 years to finally understand that lesson and start offering a
free suite of dev tools.

The EDA industry still hasn't grokked that lesson.

~~~
nickpsecurity
Microsoft knew what they were doing: using secrecy, obfuscated
formats/protocols, copyright law, and patent law to block as much competition
as possible. They made billions in the process. The EDA vendors do something
similar but mostly acquire competitors. There are just three of them covering
the basic parts of ASIC design. They aren't competing on driving prices down,
either.

So, their strategy is smart until they, like Microsoft, get into a situation
where fewer and fewer customers need them. I did think one could do a more
open version of EDA, though. I was gonna try to talk to someone at a smaller
player, Mentor, but they got bought. I'm keeping my eyes open for new players
wanting a differentiator.

------
Havoc
I thought chips are already largely algo designed?

------
spencerg12
nice

