
How to Design a New Chip on a Budget - kensai
https://spectrum.ieee.org/tech-talk/computing/hardware/lowbudget-chip-design-how-hard-is-it
======
FPGAhacker
I believe the major disruption waiting to happen in chip design is open source
tools, and the hardware languages.

Frankly, the closed source tools are pretty awful from a user interface
perspective. Under the hood, amazing things happen, but the tools are pretty
awful to use.

and the languages... ugh, the languages. Our big leap forward was
SystemVerilog. It's a bastardization of three languages: '80s C++, '80s
Verilog, and an interesting constraint solver in a distinct dialect.

We need (in my humble opinion):

* Open source high performance discrete event simulator (language agnostic, use an intermediate representation like a source mapped netlist)

* Said simulator, but distributed (multi core, process, data center)

* An ecosystem that encourages language development (Said simulator could help)

* An open source synthesis framework that can read the intermediate representation netlist.

* GUIs that don't cause eye hemorrhaging and monitor punching.
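
For what it's worth, the kernel of a discrete event simulator is conceptually just a time-ordered event queue; here's a toy sketch in Python (illustrative only, with a made-up clock example; real simulators add delta cycles, scheduling regions, and compiled-code execution):

```python
import heapq

class Sim:
    """Minimal discrete-event kernel: a priority queue of (time, seq, callback)."""
    def __init__(self):
        self.now = 0
        self._seq = 0      # tie-breaker so same-time events fire in schedule order
        self._events = []

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._events:
            self.now, _, callback = heapq.heappop(self._events)
            callback()

# Toy "netlist": a clock process that toggles a wire every 5 time units
# until t=20, recording each edge in a trace.
sim = Sim()
wire = [0]
trace = []

def clock():
    wire[0] ^= 1
    trace.append((sim.now, wire[0]))
    if sim.now < 20:
        sim.schedule(5, clock)

sim.schedule(5, clock)
sim.run()
print(trace)  # [(5, 1), (10, 0), (15, 1), (20, 0)]
```

Everything interesting in a real simulator lives around this loop, not in it.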

~~~
dbcurtis
I worked in EDA for several years, late '80s/early '90s. It's tough. Yes, the
current tools are miserable. But the market isn't huge, so there are only so
many development dollars to go around. And the interfaces between tools are
information-lossy kludges held together with duct tape and string.

Open source is hard, because without access to the data, it is hard to do a
good tool like a timing analyzer. Heck, when you _do_ have access to the data
and you can talk to the physical chemist who designed the process because his
desk is 5 rows over, it is _still_ hard to get it better than "close enough".

I've thought many times about doing an open source event-driven logic
simulator. At one point, I was a top guru of that technology. The thing is, I
doubt if a sufficient number of people would care, and it is basically
worthless without all the tools that feed it and drive it.

~~~
FPGAhacker
I would be happy if all my bullet points applied only to cycle accurate sims.

I think the market would create itself with the right ingredients present. I
think a high level language, a well defined intermediate representation with
an eye toward synthesis but initially just targeting simulation, and a cycle
accurate high performance simulator for said intermediate language would be
explosive.

(bonus points for writing the sim in WASM, heh)

~~~
dbcurtis
WASM? Well, if I'm going to tilt at windmills, I'll use the project to learn
Rust while I'm at it.

Explosive? I remain unconvinced. There just aren't enough people doing that
kind of work.

I _am_ excited to see progress in open source FPGA fitters. I've always felt
those would be hard to do because without access to the performance model, it
is hard to do a good job of auto-placement. So that is cool, even though the
current open source tool (I forget the name...) only does a few FPGAs.

~~~
exikyut
If an open source tool exists, people will end up playing with it. That's
categorically not true of closed source tools - as social creatures we
want to share what we're doing, so if we can't share that we stumbled on a
cool closed-source tool leaked on the internet, what's the point of playing
with it?

So there's that.

I would very much like to hear about your progress on this, FWIW.

Also, I'm not sure, but just in case: the following things I was reminded of
may be directly or indirectly useful (for ideas or parts, or maybe some of
the engineers may be interesting to talk to..?)...

- MARSSx86 - a cycle-accurate x86 emulator built on top of QEMU:
[http://www.marss86.org/~marss86/index.php/Home](http://www.marss86.org/~marss86/index.php/Home)

- Cling - a C++ interpreter built on top of LLVM's JIT:
[https://root.cern.ch/cling](https://root.cern.ch/cling)

There are several cycle-accurate emulators out there; MARSSx86 is the first I
discovered. I'm not sure if it's useful.

I'm not entirely sure why I'm mentioning Cling. It used to be based on a
custom runtime (and called CINT) that was absolutely massive and was basically
its own C++ implementation. Cling is effectively a very small patch/driver on
top of LLVM's C++ implementation and its JIT runtime.

------
kumarski
[http://efabless.com](http://efabless.com) does 180nm open source in the
browser window and you get a chip in the package at 5K USD or less.

Xfab is on the backend.

------
srcmap
The article mentions "Magic" as an open source design tool.

I was hired by UC Davis as a summer intern to work on porting it from X10 to
X11 in 1988.

Very cool to see it still around and being used after 30 years!

~~~
Symmetry
I remember using it in my layout class in 2004 or so. I didn't realize it was
open source; I'm sort of tempted to find it and give it a spin.

------
anfilt
I have thought about this. I have looked at several multi-wafer services for
my own projects. Mainly some interesting analog designs mixed with digital.

For instance, [http://cmp.imag.fr/](http://cmp.imag.fr/) has a 350nm process
for only 650 euros per sq mm. However, that excludes packaging costs, which
are quite steep for such a low number of devices.

~~~
yzhou
A minimum of 5.5mm^2 is required, so we are talking about 3575 euros here.
Besides, it is very rare to get a production grade chip on the first try; bugs
happen all the time, and you might need 2 or 3 or even more tries to get a
working product.

~~~
jacquesm
You might, but there are plenty of examples of first silicon working; the
trick is to spend a lot of time on getting your simulations right.

------
anonymousDan
How easy is it to take code written for an FPGA and transfer it to run on an
ASIC (e.g. code written in Verilog)? Are they so different it would need to be
rewritten from scratch, or could it be done with the equivalent of a
recompile? Do people often use FPGAs as a stepping stone to a custom ASIC
design? Would learning how to program FPGAs be of any use when it comes to
designing ASICs?

I'm interested in getting into FPGAs, but I can't help but worry they are a
bit of a dead end as any sufficiently popular use case will eventually be
replaced by an ASIC.

~~~
FPGAhacker
FPGAs are definitely not a dead end. By virtue of being reconfigurable, they
will never be obsolete as long as ASICs are a thing. Now, some whole new
technology will come along eventually, supplanting present day ASICs and
FPGAs... but until then...

Program as a term means something different in chip design than it does in
software. One analogy: to program an FPGA is to paint a canvas. The source
code in chip design is the instructions for how the canvas should be painted.

Another analogy: to program an FPGA is to cook a meal. The source code is the
recipe for the meal. But one doesn't run a recipe on a meal.

These analogies break down because a painting and a meal are passive... they
don't do anything by themselves, or react to the outside world.

So another analogy would be building a car. Here "programming" and "building"
are the analogous terms. The instructions for the assembly line to construct
the car are the source code. Once built, the car responds to stimulus (steering
wheel, pedals) and does stuff. Same with the FPGA. It has inputs; it responds
and does stuff. If you painted a picture of a CPU in your FPGA, it could run
software.

There is tremendous overlap in designing for an FPGA and an ASIC. Most ASICs
start life as an FPGA simply to prototype an idea.

The difference between an ASIC and an FPGA, at a high level from a design
perspective, is the difference between writing with a pen vs a pencil.
Learning to write is equally applicable.

It's probably not helpful to think about right now, but an FPGA is actually an
ASIC.

~~~
deepnotderp
The LUT based architecture is starting to run out of steam, I think a CGRA
sort of architecture is the future, but programmable logic startups will
likely fail, and there's approximately a zero percent chance that Xilinx or
Altera would try anything that new.

~~~
anfilt
Problem is you still generally need simple logic to combine coarse-grained
blocks. Also, we already have a lot of FPGAs that include adders, RAM, DSP
cores, and other coarse-grained blocks.

Honestly, a LUT can be a pretty efficient structure for what it does. The
biggest advantage of coarse-grained structures is that they are much faster,
since their internal construction can use optimal routing.

The biggest issue with FPGAs is the programmable routing/connections. Ideally
each LUT would be connected to every other, forming a complete graph. However,
the number of wires then grows as n(n-1)/2, where n is the number of LUTs. So
instead the structure is more hierarchical. Still, the majority of silicon on
an FPGA is just used for routing.
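
To put rough numbers on that growth (a complete graph on n nodes has n(n-1)/2 edges; the LUT counts below are arbitrary illustrative values, not from any real device):

```python
# Wires needed for a fully connected (complete-graph) interconnect of n LUTs:
# each of the n LUTs pairs with each of the other n-1, counted once -> n*(n-1)/2.
def full_crossbar_wires(n):
    return n * (n - 1) // 2

for n in (8, 64, 1024, 100_000):
    print(f"{n:>7} LUTs -> {full_crossbar_wires(n):>13,} wires")
# 8 LUTs need 28 wires; 100,000 LUTs would need ~5 billion,
# which is why real FPGAs use hierarchical/segmented routing instead.
```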

However, I think an array of ALU's actually could be quite useful for some
applications over an FPGA.

~~~
deepnotderp
No, I understand, but the granularity of the LUT exacerbates the routing
problem, because in a CGRA you can route multiple wires at once.

~~~
anfilt
Well, this depends on the underlying routing architecture of either system.
However, you are right in general, since finer-grained logic means more things
that need routing.

Nothing stops you from treating LUT outputs in groups like a coarse-grained
system, though. FPGA manufacturers could make chips with a different routing
topology that works really well for certain applications.

However, we could be making lots of devices that fit certain data flow
patterns better. Doing so makes the devices simpler and faster.

Routing is pretty important. It's just that current FPGAs are built with quite
flexible interconnect.

If you want to see really limited programmable interconnect, look at some old
PLDs that you program by blowing fuses.

------
schappim
The TL;DR of this article is:

"...a simple ASIC (say one that is a few square millimeters in size,
fabricated using the 250-nm technology node) might cost a few thousand bucks
for a couple dozen samples."

~~~
pjc50
There's a lot of people quoting the price in this thread but very few coming
forwards to say "yes, I've actually done this".

~~~
Hasz
The number of people with the experience, knowledge, cash, and who are not
bound by NDA is very small.

~~~
deepnotderp
Literally everyone who has done this is bound by NDA. Every single foundry
will make you sign one.

~~~
anfilt
Having signed a couple of those NDAs: the foundries are mainly concerned
about their standard cell library, and any information that may let a
competitor understand the details of their lithographic process. Most
engineers just use the cell library from the foundry, but the cell library
does contain information about a foundry's process.

A lot of foundries will let you make chips without using their cell library.
You sometimes have to make completely custom components in the analog world.
(Be warned, this is a ton of work and no easy undertaking.)

However, even if you developed your own cell library for a particular foundry,
it will still be tied up by an NDA, since it may leak information about how
the foundry handles optical proximity correction and uses phase shifting (of
light) to increase the resolution of their process. Also, how many layers it
takes to implement something, and how each of those layers has particular
characteristics, may reveal some of the chemistry and material science used to
dope the silicon or create certain structures for their process.

~edit a few typos/omissions~

~~~
Hasz
How much of this was "simulatable" -- either by Cadence or Magic?

What's the average number of tries to get something right?

How'd you end up in this business?

~~~
anfilt
Cadence, for instance, lets you create device models. If we are making a
simple inverter, some things you would need are: width and height, channel
dimensions, zero-bias voltage, zero-bias depletion capacitance (planar and
sidewall), channel length, surface potential, oxide thickness, carrier
saturation velocity, junction grading, diffusion area, transconductance,
carrier mobility...

The list goes on. Once you get this information, you can create a model file
that will let you simulate your simple NOT gate. However, if you are starting
from scratch, such as when something does not exist in the standard cell
library, you can't just measure these properties, since the device does not
exist yet. So you have to run other simulation software to get reasonable
values, as calculating them by hand is not pretty. Also, to get these values
you may need information from the foundry. For instance, Intel's finfet
transistors behave differently in some regards compared to a traditional
planar transistor (mainly the channel). Intel is not going to just tell you
how they work so you can get an accurate model of them without an NDA. Also, a
foundry's process can affect your design, as layer thickness changes things
such as capacitance. So the big thing is Cadence does not let you model

Cadence also can only simulate a limited number of devices. So for large
designs/systems you can't simulate the whole thing; you can only simulate
sub-components. It's also slow to simulate a large design, again pushing you
toward smaller, simpler sub-components. It's limited to things you can
generate a netlist for. It additionally will let you calculate cross voltages.
I could keep going on and on, but there is only so much I can cram into a
Hacker News response.
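
As a toy illustration of what such a model file feeds into, here is the textbook long-channel square-law MOSFET equation with made-up parameter values (nothing like the BSIM-class models a real PDK ships):

```python
def mosfet_id(vgs, vds, vth=0.7, k=2e-4, w_over_l=10.0):
    """Drain current [A] from the long-channel square-law model.
    vth: threshold voltage [V], k: process transconductance [A/V^2],
    w_over_l: width/length ratio. All parameter values are illustrative.
    """
    vov = vgs - vth                       # overdrive voltage
    if vov <= 0:
        return 0.0                        # cutoff: no channel
    if vds < vov:                         # triode (linear) region
        return k * w_over_l * (vov * vds - vds ** 2 / 2)
    return 0.5 * k * w_over_l * vov ** 2  # saturation

# Sweep the gate of a hypothetical NMOS held in saturation:
for vgs in (0.5, 1.0, 1.5):
    print(f"Vgs={vgs:.1f} V -> Id={mosfet_id(vgs, vds=1.8) * 1e6:.1f} uA")
# Vgs=0.5 V -> Id=0.0 uA, Vgs=1.0 V -> Id=90.0 uA, Vgs=1.5 V -> Id=640.0 uA
```

A real model card replaces these three numbers with hundreds of extracted parameters, which is exactly the data the foundry keeps under NDA.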

It depends on who you are working for and what you are making. However, you
generally use 2 spins for a large device. I bet you have heard the term
engineering silicon. That's usually the first spin. If there are problems,
usually the only changes are made to the metal (wiring) layers of the masks.
If a serious problem is discovered it may require a complete re-spin. That's
if you are mainly using the standard cell library provided by the foundry. If
you are making something from scratch, that's a whole other story. However,
you still generally build into your design elements that let you shut off
defective parts, or include redundant elements to increase the chance of one
element working. So the designs also include a lot of additional circuits and
logic, which you may not be aware of, for debugging and testing purposes. If
you can't get the device perfect, you may still sell it and publish an errata,
or specify a more limited range of operation.

I'll just say I am a computer engineer. My first job was at Micron.

~~~
kurthr
Yes, God forbid you need native (low threshold) devices... want to minimize
NWell spacing for stacked devices while preventing ESD latch-up... care about
capacitance density or voltage variation.

Basically, if you are designing mixed-signal/analog, then your PDK (Process
Design Kit) either comes from a tier-1 foundry (TSMC/UMC), the process is
a very good copy (SMIC/GF), or you need a year of support and one or more
full-time process support engineers.

------
vegetablepotpie
Question, I read that fabs use specific, proprietary technology to produce
ASICs and this technology is protected by NDAs and Trade Secrets. Some
components require intimate process knowledge, which may never be open. Why
wouldn't these companies have patented these technologies?

Granted, there isn't a lot of love for patents in software, but I think ASIC
design is a case where the concept would be beneficial for the public.
Although the fabs would have a monopoly on their process for a time, it would
at least be published, so that open source tools could be made that take
advantage of these processes.

~~~
analognoise
You said it yourself: "Although the fabs would have a monopoly on their
process for a time, it would at least be published, so that open source tools
could be made that take advantage of these processes."

I know of at least one place that grows their own wafers and does all their
own processing, and publishes absolutely nothing about it - no patents, no
external papers, zip, zilch, nada - specifically because other people could
read the patents and figure out what they were doing, so they keep it all
trade secret. It gives them absolutely no advantage to publish about something
that requires enormous capital costs even to consider, it risks exposing
some of their secret sauce, and it costs tens of millions of dollars (...a
year) to develop and maintain - and you'd want them to what, give it away? For
'open source'?

Open source hasn't even given us a competent PCB package, and has only
recently given us a marginally passable office suite. In 2018.

Let the magnitude of the utter failure of open source just sink in for a
minute, there. Are people really that clueless, that they think "well, someone
will take advantage of this if we just get people to publish about all the
details that someone else spent all the money to create AND maintain!"

I absolutely don't get the circle jerk for open source. It's amazing and
wonderful at times, but let's not pretend it solves real problems. Elon Musk
isn't going to open source his rocket designs anytime soon, but if you want a
selection of 35000 poorly designed MP3 players, open source has your back!!112

If an open source solution solves a real problem, it was government funded
(SPICE, LAPACK, MAXIMA, etc). We all _paid_ for it. The one shining
counterexample is the GNU Compiler - but FFS, they couldn't build a kernel
even after the success of their compiler!

~~~
exikyut
I don't care what anybody else thinks, I 1000% agree with this rant. I feel
similarly strongly about it, and have ever since I discovered the whole open
source thing myself several years ago.

With gcc, IIRC Richard Stallman was basically just playing cat-and-mouse with
feature parity with the commercial compilers for several years. Okay, very
impressive investment in terms of total SLOC, but methinks that's a byproduct
of a brain being able to excel in exactly the field it's really really good
at. Apparently rms wasn't a kernel person (?).

Broadly speaking open source gives me the impression of a bunch of people who
honestly don't know what they're talking about and who aren't really all that
smart. As a collective, that group isn't going to have very good ideas _or_
execution. There are undoubtedly some smart people hiding in the corners, but
their work is shunned or scoffed at because of the collective lack of
intelligence of the whole.

While it's probably a frustratingly unanswerable question, I've been yearning
for some time for an online community that is like-minded toward the idea that
open source isn't everything and that there are better things out there.

~~~
analognoise
It isn't online: for most of us, that's work. :)

------
loup-vaillant
> _(One study found that the average commercial application contains 35
> percent open-source code.)_

Yay!

 _< Clicks the link>_
[https://info.blackducksoftware.com/rs/872-OLS-526/images/OSS...](https://info.blackducksoftware.com/rs/872-OLS-526/images/OSSAReportFINAL.pdf)

Nooo…

