
Google offers free fabbing for 130nm open-source chips - tomcam
https://fossi-foundation.org/2020/06/30/skywater-pdk
======
Taek
I've spent some time in the chip industry. It is awful, backwards, and super
far behind. I didn't appreciate the full power of open source until I saw an
industry that operates without it.

Want a linter for your project? That's going to be $50k. Also, it's an
absolutely terrible linter by software standards. In software, linters combine
the best ideas from thousands of engineers across dozens of companies building
on each other's ideas over multiple decades. In hardware, linters combine the
best ideas of a single team, because everything is closed and proprietary and
your own special 'secret sauce'.

In software, I can import things for free like nginx and mysql and we have
insanely complex compilers like llvm that are completely free. In hardware,
the equivalent libraries are both 1-2 orders of magnitude less sophisticated
(a disadvantage of everyone absolutely refusing to share knowledge with each
other and let other people build on your own ideas for free), and also are
going to cost you 6+ figures for anything remotely involved.

Hardware is in the stone age of sophistication, and it entirely boils down to
the fact that people don't work together to make bigger, more sophisticated
projects. Genuinely would not surprise me if a strong open source community
could push the capabilities of a 130nm stack beyond what many 7nm projects are
capable of, simply because of the knowledge gap that would start to develop
between the open and closed world.

~~~
m12k
I've been thinking about this a lot lately. In economics, the value of
competition is well understood and widely lauded, but the power of cooperation
seems to be valued much less - cooperation simply doesn't seem as fashionable.
But the FOSS world gives me hope - it shows me a world where cooperation is
encouraged, and works really, really well. Where the best available solution
isn't just the one that was made by a single team in a successful company that
managed to beat everyone else (and which may or may not have just gotten into
a dominant position via e.g. bigger marketing spend). It's a true meritocracy,
and the best ideas and tools don't just succeed and beat out everything else,
they are also copied, so their innovation makes their competitors better too -
and unlike the business world, this is seen as a plus. The best solutions end
up combining the innovation and brilliance of a much larger group of people
than any one team in the cutthroat world of traditional business. Just think
about how much effort is wasted around the world every day by hundreds of
thousands of companies reinventing the wheel because the thousands of other
existing solutions to that exact problem were also created behind closed
doors. Think about how much of this pointless duplication FOSS has already
saved us from! I really hope the value of cooperation and the example set by
FOSS can spread to more parts of society.

~~~
naringas
But let's not forget that the non-material nature of software allows this; hardware
will always be a physical (material) artifact.

However, the line does blur when talking about blueprints and designs.

In any case, I think that free software movements are a sociological anomaly.
I wonder if there is any academic research into this from an anthropological
or historical-economics viewpoint.

Also, it seems to me that in some sense the entire market works in
cooperation, just not very efficiently (it optimizes for things other than
efficiency and is heavily distorted by subsidies and tariffs)

~~~
m12k
I'm curious to hear what you mean when you say the entire market works in
cooperation? I mean, strategic partnerships happen, and companies work as
suppliers for other companies. But that's not the market - the market is where
someone wanting to buy something goes and evaluates competing products and
picks the one they want to buy. It's pretty comparable to natural selection,
where the fittest animals survive and the fittest companies get bigger
marketshare while the least fit companies go bankrupt, and the least fit
species go extinct. So I guess you could say that the market functions as an
ecosystem - maybe the word you were looking for was 'symbiosis' rather than
cooperation? Cheetah's aren't cooperating with lions, they are competing - but
relative to the rest of the ecosystem, they exist in a form of symbiosis.

~~~
visarga
'Survival of the fittest' when applied to groups implies cooperation as an
essential skill between individuals. It's not all competition. The fittest
group is not the one made of the most individually fit members, they also need
to function well together.

~~~
jimmySixDOF
_" selfish individuals beat altruistic individuals. Altruistic groups beat
selfish groups. Everything else is commentary."_

-David Sloan & E. O. Wilson

------
est31
This is amazing.

I think the main reason why open source has taken off is because access to a
computer is available to many people, and as cost is negligible, it only
required free time and enough dedication + skill to be successful. For
hardware though, each compile/edit/run cycle costs money, the software often has
5-digit per-seat licenses, and thus the number of people with enough resources
to pursue this as a hobby is quite small.

Reduce the entry cost to affordable levels, and you have increased the number
of people dramatically. Which is btw also why I believe that "you can buy 32
core threadripper cpus today" isn't a good argument to ignore large
compilation overhead in a code base. If possible, enable people to contribute
from potatoes. Relatedly, don't require gigabit internet connections either:
downloading megabytes of precompiled artifacts that change daily isn't great.

~~~
mhh__
Not only is the software expensive, it's often crap. By which I don't mean 'oh
no, it doesn't look nice' - crap as in productivity-harming.

For example, Altium Designer is probably the most modern (not most powerful
although close) PCB suite and yet despite costing thousands a seat it is a
slow, clunky, _single-threaded (in 2020)_ program (somehow uses 20% of a 7700k
at 4.6GHz with an empty design). Discord also thinks that Altium Designer is
some kind of anime MMO.

~~~
imtringued
From what I can tell, a lot of parametric design software is also single-
threaded. I felt this was an opportunity where use of multiple cores could
make FreeCAD stand out a little bit. Except FreeCAD uses OpenCASCADE as its
kernel, and they require you to sign a CLA just to download the git
repository. Considering that barrier to just cloning the code, I decided not
to contribute anything. They do offer zip file downloads of the source code,
but at that point I had lost interest.

~~~
CompAidedPoster
I suspect geometric kernels and 2D/3D renderers don't fall into the "easy to
parallelize" category. Of course there are functions that use multiple
threads, but it's not obvious how you could build the core system to do so.
However, the code in CAD software is often pretty old; it wasn't that long ago
that many of these still used immediate-mode OpenGL, and I wouldn't be
surprised if some still do.

In the same vein, ECAD tools often don't use GPU-accelerated 2D rendering but
instead use GDI and friends (which used to be HW-accelerated, but hasn't been
since WDDM/Vista).

A lot of "easy" opportunities to improve UX and productivity.

~~~
jschwartzi
It seems like it depends a lot on your representation of the circuit network.
If you consider each trace and PCB element as a node in a graph which maps the
connections of the traces and PCB elements then you could parallelize provided
you can describe the boundary conditions at each node. There's a degree to
which they're interdependent, but there are also nodes at which the boundary
condition is effectively constant and I think those would make good cut points
for parallelization.
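The cut-point idea above can be sketched in a few lines: treat the constant-boundary nodes as cuts and split the remaining graph into connected components, each of which can then be solved independently. Everything here (node names, the "constant node" set, the flood-fill approach) is a made-up toy illustration, not any real EDA API:

```python
from collections import defaultdict

def partition_at_constant_nodes(edges, constant_nodes):
    """Split a circuit graph into regions that can be solved in
    parallel, cutting at nodes whose boundary condition is effectively
    constant (e.g. supply rails or strongly driven nets)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    visited = set(constant_nodes)  # cut points never belong to a region
    regions = []
    for start in adj:
        if start in visited:
            continue
        region, stack = set(), [start]
        while stack:  # flood-fill, stopping at constant nodes
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            region.add(node)
            stack.extend(n for n in adj[node] if n not in visited)
        regions.append(region)
    return regions

# Toy netlist: VDD is a constant-boundary node joining two subcircuits,
# so the flood fill yields two independently solvable regions.
edges = [("a", "b"), ("b", "VDD"), ("VDD", "c"), ("c", "d")]
print(partition_at_constant_nodes(edges, {"VDD"}))
```

Each returned region could then be handed to a separate worker, with the constant nodes supplying fixed boundary values.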

------
leojfc
Strategically, could this be part of a response to Apple silicon?

Or put another way, Apple and Google are both responding, in idiosyncratic
ways, to Intel's/the market's failure to innovate enough:

- Apple treats lower layers as core, and brings everything in-house;

- Google treats lower layers as a threat and tries to open-source and
commodify them to undermine competitors.

I don’t mean this free fabbing can compete chip-for-chip with Apple silicon of
course, just that this could be a building block in a strategy similar to
Android vs iOS: create a broad ecosystem of good-enough, cheap, open-source
alternatives to a high-value competitor, in order to ensure that competitor
does not gain a stranglehold on something that matters to Google’s money-
making products.

~~~
Nokinside
These are not related at all. The only common element is making silicon.

Apple spends $100+ million to design a high-performance microarchitecture on a
high-end process for their own products.

Google gives a tiny amount of help to hobbyists so that they can make chips on
legacy nodes. Nice thing to do, nothing to do with Apple SoCs.

---

Software people on HN constantly confuse two completely different things:

(1) Optimized high-performance microarchitecture for the latest processes and
large volumes. This can cost $100s of millions, and the work is repeated every
few years for a new process. Every design is closely optimized for the latest
fab technology.

(2) Generic ASIC design for a process that is a few generations old. Software
costs a few $k or $10k, and you can use the same design for a long time.

~~~
_bxg1
> Nice thing to do

I don't believe Google does anything because it's a "nice thing to do".
There's some angle here. The angle could just be spurring general innovation
in this area, which they'll benefit from indirectly down the line, but in one
way or another this plays to their interests.

~~~
sixothree
Google has never created a product that does not collect data in a unique
manner apart from its other products.

~~~
ladyanita22
They must be some kind of geniuses. I don't see how they are going to be able
to extract personal information out of this.

~~~
sixothree
They're not doing this out of the kindness of their heart. Just because we
don't know the data being collected here (yet) does not invalidate my
statement. Name a google product and you can easily identify the unique data
being collected.

------
timerol
TFA doesn't really summarize what's available very well, so let me take a shot
from a technical perspective:

- 130 nm process built with the SkyWater foundry
- The SkyWater Process Design Kit (PDK) is currently digital-only
- 40 projects will be selected for fabrication
- 10 mm^2 of area available per project
- Will use a standard harness with a RISC-V core and RAM
- Each project will get back ~100 ICs
- All projects must be open source via Git - Verilog and gate-level layout

I'm curious to see how aggressive designs get with analog components, given
that they can be laid out in the GDS, but the PDK doesn't support it yet.

~~~
raverbashing
I'm thinking 10mm^2 so kinda 3.3mm x 3.3mm. In 130nm

(Remember 130nm gave us the late-model Pentium IIIs, the second-generation
P4s, and some Athlons, though all of these had a bigger die size.)

I'm thinking you could have some low-power or very specific ICs, where these
would shine as opposed to a generic FPGA solution.

~~~
moonchild
> 10mm^2 so kinda 3.3mm x 3.3mm

3.16mm x 3.16mm?

~~~
raverbashing
Square root by head is hard ;)

------
DCKing
From a hobbyist and preservation perspective, it would be cool if this could
be used to produce some form of the Apollo 68080 core to revive the 68k
architecture a little bit, and build out the Debian m68k port [0][1]. The last
"big" 68k chips were produced in 1995 (that would be a 350nm process?) so this
could be hugely improved on 130nm. The 68080 core is currently implemented in
FPGAs only and is already the fastest 68k hardware out there. With a real
chip, people could continue upgrading their Amigas and Ataris.

[0]: [http://www.apollo-core.com/](http://www.apollo-core.com/) I can't easily
tell how "open source" it is, but it's free to download.

[1]:
[https://news.ycombinator.com/item?id=23668057](https://news.ycombinator.com/item?id=23668057)

~~~
pjc50
What happened to freescale/NXP "Coldfire"?

~~~
herio
ColdFire is still around but is also not fully binary compatible with 68k.
There have been attempts at making Amiga accelerator cards using Coldfires,
but I don't think I've ever seen one that was fully finished.

~~~
cmrdporcupine
It's not fully binary compatible, but pretty damn close. People working on the
FireBee were able to make it run pretty smoothly; there are some things you
can do to trap the old instructions and rewrite them. It's never going to work
for games and the like, but games from that era have all sorts of other
video-hardware-specific dependencies that are even more difficult to satisfy.

It really is "spiritually" a 68k series processor, just with some cleaning up.
I like it.

------
d_tr
I believe that a lot of people here might be interested in "Minimal Fab",
developed by a consortium of Japanese entities.

These are kiosk-sized machines that a company can use to set up a fab for a
few million dollars. Any individual can then design a chip and have it
fabricated very (as in "I want to make a chip for fun") affordably.

I was not able to find a ton of information on this, but the 190nm process was
supposedly ready last year and there were plans to go below this. The wafers
are 12mm in diameter (so basically, one wafer -> one chip) and the clean room
is just a small chamber inside the photolithography machine. There are also no
masks involved, just direct "drawing".

------
patwillson22
My advice to anyone who's looking for a pathway into open source silicon is to
look into e-beam lithography. Effectively, e-beam lithography involves using a
scanning electron microscope to expose a resist on silicon. This process is
normally considered too slow for industrial production, but its simplicity and
size make it ideal for prototyping and photomask production.

The simplistic explanation for why this works is that electron beams can be
easily focused using magnetic lenses into a beam that reaches the nanometer
level.

These beams can then be deflected and controlled electronically, which is what
makes it possible to effectively make a CPU from a CAD file.

Furthermore, it's very easy to see how the complexity of photolithography goes
up exponentially as we scale down.

Therefore I believe it makes sense to abandon the concept of photolithography
entirely if we want open source silicon. I believe this approach offers
something similar to the sort of economics that enabled 3D printers to become
localized centers of automated manufacturing.

I should also mention that commercial e-beam machines are pretty expensive
(something like $1 million), but I don't think it would be that difficult to
engineer one for a mere fraction of that price.

~~~
namibj
I suggest you take a look at how easy maskless photolithography is:
[https://sam/zeloof.xyz](https://sam/zeloof.xyz)

Theoretically it should be feasible to fab 350 nm without double-patterning by
optimizing a simple immersion DLP/DMD i-line stepper.

I think ArF immersion with double-patterning should be able to do maskless 90
nm.
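As a rough sanity check on those numbers, the Rayleigh criterion (CD = k1 * lambda / NA) gives ballpark minimum feature sizes. The k1 and NA values below are my own assumptions for a hobbyist-grade system, not figures from any datasheet:

```python
def min_feature_nm(wavelength_nm, na, k1):
    """Rayleigh criterion: smallest printable half-pitch
    CD = k1 * wavelength / NA."""
    return k1 * wavelength_nm / na

# i-line (365 nm) through a modest lens with relaxed k1: features in
# the few-hundred-nm class, consistent with a 350 nm target.
print(min_feature_nm(365, na=0.5, k1=0.5))   # 365.0

# ArF immersion (193 nm, NA up to ~1.35): roughly 50 nm single-exposure
# half-pitch, so 90 nm with double patterning looks comfortably feasible.
print(min_feature_nm(193, na=1.35, k1=0.35))
```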

~~~
kanwisher
fixing the url [http://sam.zeloof.xyz/](http://sam.zeloof.xyz/)

------
why_only_15
How much has power efficiency improved between 130nm and 7nm? Is it plausible
to get better performance/watt for a custom chip on 130nm vs a software
application running on a 7nm chip? I get that hardware has other benefits,
but I'm just wondering where the cost/benefit starts to make sense for
accelerators.

~~~
pjc50
> Is it plausible to get better performance/watt for a custom chip on 130nm vs
> a software application running on a 7nm chip?

This very, very much depends on what the algorithm is (integer or FP? how data
dependent?), but I would say no for almost all interesting cases.

The only exception would be if you're doing a "mixed signal" chip where some
of the processing is inherently analogue and you can save power compared to
having to do it with a group of separate chips.

Another exception might be _low leakage_ construction, because that gets worse
as the process gets smaller. This is only valuable if your chip is off almost
all of the time and you want to squeeze down exactly how many nanoamps "off"
actually consumes.

~~~
awelkie
An open source WiFi chip would be super cool. I wonder how easy it would be to
take the FPGA code from openwifi[0] and combine it with a radio on the same
chip?

[0] [https://github.com/open-sdr/openwifi](https://github.com/open-
sdr/openwifi)

~~~
pjc50
The problem is that analogue IC design is a field that even digital IC design
people regard as black magic. It's clearly _possible_ for that to happen but
the set of people who have the skills to do it is very narrow and most of them
are probably prevented from doing it in their spare time by their employment
agreements.

I wonder how many "test chips" Google will let a non-expert team do to get it
right? And whether they provide any "bringup" support?

~~~
monocasa
They're only allowing parts that stay within the bounds of the PDK (which only
allows digital designs) for now.

------
xvilka
I should note there's an open source ASIC toolchain - OpenROAD [1][2]. I
wonder if these can be integrated. You can also use SymbiFlow to prototype
your design on an FPGA [3][4].

[1] [https://theopenroadproject.org/](https://theopenroadproject.org/)

[2] [https://github.com/The-OpenROAD-Project/OpenROAD](https://github.com/The-
OpenROAD-Project/OpenROAD)

[3] [https://symbiflow.github.io/](https://symbiflow.github.io/)

[4] [https://github.com/SymbiFlow](https://github.com/SymbiFlow)

~~~
nihil75
Spot on. Both are discussed by Tim in the video as part of the solution stack.

------
StillBored
Because apparently no one remembers the other "free" fab service.

[https://www.themosisservice.com/university-
support](https://www.themosisservice.com/university-support)

Previously MOSIS would select a few student/research designs to run along with
the commercial MPW runs, frequently on pretty modern fabs. I'm not really sure
how many they still run.

(Oh, here are the MOSIS/TSMC runs for this year:
[https://www.mosis.com/db/pubf/fsched?ORG=TSMC](https://www.mosis.com/db/pubf/fsched?ORG=TSMC))

~~~
dasudasu
But this one is open to hobbyists. If you're doing a graduate degree in ASIC
design and your group doesn't have the funds to do simple fab runs, you're
probably in a somewhat questionable program to begin with.

------
kingosticks
> All open source chip designs qualify, no further strings attached!

Surely they have some threshold requirements that the thing actually works?
How is this going to work? I mean, if there's no investment required from me,
what's the incentive for me to verify my design properly? What's the point in
them fabbing a load of fatally bugged open-source designs?

~~~
jsnell
All open source designs qualify - that doesn't mean they all get selected :/
If you look at the slides, they say that each run will be 40 designs, and
they'll do one run this year and multiple next year. The criteria for how
they'll choose if there are more than 40 applicants are TBD.

------
moring
With the PDK being open, does anyone know if any kind of NDAs are still
required to get a chip fabbed? While free-of-charge fabbing is quite nice, I
think being NDA-free is even more important so all work including the tweaks
necessary for fabbing can be published, e.g. on GitHub.

BTW, it will be nice to try this together with the OpenROAD tools [1]. They
have support for Google's PDK on their to-do list (planned for Q3, but I doubt
it will be ready that fast).

[1] [https://github.com/The-OpenROAD-Project](https://github.com/The-OpenROAD-
Project) [https://theopenroadproject.org/](https://theopenroadproject.org/)

~~~
lowwave
Yup, NDAs destroy economic productivity.

~~~
lowwave
From talking to VCs: NDAs are useless. It really comes down to whether you
trust the people or not. It's like a patent - just a gesture. Everything is in
the implementation.

------
tdonovic
That sounds pretty huge. I've never seen on Hackaday or similar people getting
small runs of chip fabbed. What are the broader implications of this? Will
other fabs start to lower the barrier to production as well?

~~~
ohazi
You generally don't do small runs of chips, unless cost is no object. The NRE
costs of getting the masks made, even on older processes like these, are still
comfortably in the $X00,000 range, blowing past $1 million pretty quickly if
you need a process that isn't ancient. That's without design software
licenses, which can be hundreds of thousands more.

So the minimum order quantity usually needs to be at least in the tens to
hundreds of thousands of chips if you don't want each chip to be a sizeable
chunk of that initial cost.
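The arithmetic behind that minimum volume is simple amortization. The NRE and unit-cost figures below are illustrative assumptions, not quotes from any fab:

```python
def per_chip_cost(nre, unit_cost, volume):
    """Share of the one-time NRE (masks, engineering) carried by each
    chip, plus the marginal cost of producing the chip itself."""
    return nre / volume + unit_cost

# Assumed numbers: $300k NRE on an older node, $1.50 marginal cost
# per packaged die.
NRE = 300_000
UNIT = 1.50
for volume in (1_000, 10_000, 100_000):
    print(f"{volume:>7} units -> ${per_chip_cost(NRE, UNIT, volume):,.2f} per chip")
```

At 1,000 units the NRE dominates completely; only around the 100k mark does the per-chip price start to look like a normal component cost.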

It would be really nice to get to the point where small batch chips were
viable though. One aspect is cost -- if they could get the NRE cost down to,
say, $20k - $50k, and the software licensing cost down to zero, that would
open up a lot of options.

The other aspect is the "dark art" nature of the process kit and communicating
with the fab. If everybody assumes that chip design is expensive, they're
going to be reluctant to even talk to the fab to see what options are
available. If they see a bunch of people building interesting things with this
shuttle program, then all of a sudden the fab is going to see more business
interest as people try to figure out if there's a way to make their project
work.

~~~
MayeulC
There are cheap-ish multi-project wafer (MPW) services [1][2].

These organizations typically also give access to software design tools. But
that's still a sizeable investment. The last project I worked on used a (more
expensive than usual I think) GloFo 22nm technology. Price was around €9k/mm²,
9mm² was the minimum area. Still much more accessible to academia than
individuals or open source projects, but not out of the realm of a
crowdfunding campaign.

There are multiple chips that ought to be open source, broadly available, and
cheap: AV1 decoders, small FPGAs, Wi-Fi or SDR chips, TPMs, and other crucial
pieces for security, DIY/open HW projects, and basic computer building blocks.
Most interesting to me are chips that would allow novel applications that
commercial ventures would never look at, like open, hackable p2p WiFi meshes,
or emulators-on-a-chip, or other application-specific coprocessors (protein
folding, etc).

[1] [https://mycmp.fr/technologies/process-catalog/](https://mycmp.fr/technologies/process-catalog/)

[2] [https://europractice-ic.com/](https://europractice-ic.com/)

~~~
namibj
A while back I came up with the idea of an ultra-miniature quadcopter with
asynchronous outrunner motors whose stators would most likely be sintered
(with or without a ferromagnetic matrix) to handle the power density, and a
simple tube-shaped rotor (though a squirrel-cage style might be better).

I'm thinking 5-20 mm rotor diameter (3M-750k rpm transonic limit), or maybe
even smaller.

The interesting part would be an analogue ASIC that decodes an external
control signal modulated onto the microwave (via rectenna) or optical (solar
cell/photodiode) "wireless power" beam.

Demodulation would first do naive rectenna-based AM demodulation, followed by
a bandpass and FM demodulation, revealing 12 carriers corresponding to the 4
3-phase motors, which are just FM-demodulated to yield the H-bridge control
signals.

These would primarily be one xx MHz PLL and 12 lower-frequency ones spaced
50-200 kHz apart (the FM subcarrier's bandwidth (assuming narrow-band FM) is
twice the maximum motor field frequency), starting as low as feasible while
still being able to use AC-coupling liberally.
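For concreteness, here is the subcarrier budget implied by those numbers; the 50 kHz maximum field frequency below is just one point picked from the stated range:

```python
def subcarrier_plan(n_subcarriers, max_field_freq_hz, guard_hz=0):
    """Narrow-band FM approximation: each subcarrier occupies roughly
    twice the maximum modulating (motor field) frequency."""
    bandwidth = 2 * max_field_freq_hz
    spacing = bandwidth + guard_hz
    total = n_subcarriers * spacing
    return bandwidth, spacing, total

# A 50 kHz maximum motor field frequency gives 100 kHz per subcarrier,
# so all twelve 3-phase drive signals fit in 1.2 MHz of spectrum.
bw, spacing, total = subcarrier_plan(12, 50_000)
print(bw, spacing, total)
```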

Also either some amplifiers for (potentially-overdriven) "linear" H-bridge
operation or (NE555-like?) PWM chopper drivers to exploit the winding
inductance for less-wasteful H-bridge operation.

Far too much to realize in discrete circuitry, but nothing really fancy beyond
a parametric PLL design. And not really realistic for a μC, either, because of
brown-out resilience and overall latency.

At least the polyphase induction motors are very easy to drive, compared to
the typical 3-phase permanent magnet outrunner motors used in most
multicopters.

Depending on how predictable the effects of some tuning parameters are,
maskless litho could allow for chips to be tuned to measured electro-
mechanical properties of these sintered motors, reaching optimal drive
waveforms. And for digital circuits, hard-wired ROM (security/shelf
life/radiation-hardness) for individual chips or even doping-controlled ROM
for anti-readout private/secret key storage.

I expect a maskless double-patterning ArF+immersion process allowing NDA-free-
usage to be "the" thing that would enable true state-of-the-art
experimentation and true ASICs (where the prototype needs an ASIC to be more
than a paperweight after some photoshoots and staged interactions).

Feel free to contact me/let me know if you'd like further discussion(s).

------
cromwellian
This reminds me of how cubesats kind of got off the ground because some launch
companies allowed extra spare capacity to be sold or donated to student
projects.

~~~
hinkley
I couldn't tell which was the cart and which the horse, but the SpaceX
telecommunications satellite launch I saw had a rideshare arrangement going
on. I suspect what happened is that someone only needed half a payload, and
SpaceX filled the rest with their own stuff. But the PR person made it sound
like the opposite was happening.

I'm not sure what happens when they reach full capacity on their sat network.
Space for research projects, or launching surplus consumables?

------
quyleanh
Well done, Google. But there is still a problem with EDA tool licenses... Is
there any replacement for the Cadence Virtuoso tool for chip design?

~~~
stephen_g
The efabless people that the talk mentions a bunch of times are using a fully
open-source design flow but I think it's a bit hacky (as in, a bunch of
command line tools from various open source projects, some of which may be
unmaintained judging by the commit logs). They seem to have successfully
fabricated a RISC-V based SoC with it though, which is crazy cool.

As somebody with a decent amount of FPGA experience, I've been intending to
have a go at setting this software up and seeing if I can get anything through
synthesis and place and route, but I haven't had the spare time.

It uses Yosys for synthesis and a few other tools for the rest of the process,
and is called Qflow -
[http://opencircuitdesign.com/qflow/index.html](http://opencircuitdesign.com/qflow/index.html)

~~~
MayeulC
IIRC there are a couple of ways to produce the intended design. At the end of
the day, fabs often take layouts in the GDSII format, which is documented and
open. The KLayout open source visualizer is industry-standard in my
experience.

Now, how do you generate these layouts? It depends on what you are doing. If
you're more on the experimental side of things, writing scripts to generate
structures is fine, as long as these conform to the fab-provided design rules.
Technically, that's still what everyone is doing at the industrial level,
except the scripts -- often written in Tcl -- are provided by the fab.

Now, if you have some FPGA experience, you are probably interested in logic
synthesis tools. There are a few; I've seen some academic ones with their own
place-and-route stage, for instance. [https://open-src-
soc.org/program.html#T-CHAPUT](https://open-src-soc.org/program.html#T-CHAPUT)
does that, I think.

The slides linked above outline one of the possible ways to do this: leverage
chisel ( [https://www.chisel-lang.org/](https://www.chisel-lang.org/)) and the
FIRRTL intermediate representation for RTL description. A few tools can ingest
the output and try to come up with a layout. Hammer ([https://github.com/ucb-
bar/hammer](https://github.com/ucb-bar/hammer)) is such a tool, but I don't
think that PDK is available with it just yet. To be honest, I don't think
commercial tools are _that_ advanced, and it would be fairly doable to catch
up.

There is some interesting work in this field, but since fabbing is expensive,
it tends to happen more within the academic community than the free software
one. I'd look for papers rather than GitHub, though that's slowly changing.

The chip design world is a slow beast to turn around: everything in the
fabrication process is optimized to maximize yield, hence very little leeway
is allowed. "If it ain't broken, don't fix it" is the motto, for good reason:
if changing humidity levels by 0.2% can make a fab lose millions, they won't
try new and experimental software.

I'm watching this space, notably with Verilog alternatives such as Migen. The
open source community is starting to embrace FPGAs, which is already great. I
wish more manufacturers opened up their bitstreams, so maybe we need an open
FPGA? This free fabbing offer would be a great fit for Wi-Fi chips, I think. I
wonder if the people at openwifi ([https://github.com/open-
sdr/openwifi](https://github.com/open-sdr/openwifi)) are interested?

I hope that gives a few interesting pointers to whoever reads this :)

~~~
orbifold
Hammer is just a driver for tools that cost >$100k to license. And that
doesn't include access to memory compilers, which you would also need.

~~~
seldridge
There is an open PR adding support for the OpenROAD tools [^1]. So, there
should be a flow that uses open source VLSI tools eventually.

The Google 130nm library is still filling a huge gap, as all the open PDKs up
to this point were "fake" educational libraries, e.g., FreePDK [^2]. You can
run them through a VLSI flow, but you can't tape them out.

[^1]: [https://github.com/ucb-bar/hammer/pull/584](https://github.com/ucb-
bar/hammer/pull/584)

[^2]:
[https://www.eda.ncsu.edu/wiki/FreePDK](https://www.eda.ncsu.edu/wiki/FreePDK)

------
dTal
Can anyone venture a guess as to why Google might be doing this? What's the
incentive structure here?

~~~
Koffiepoeder
The supply of skilled hardware designers is running low: training costs are
high, and complexity has skyrocketed. By doing this they can increase
attention to a field that is otherwise dominated by big corporations (already
somewhat the case). The only way to have a sane and healthy chip market is to
1) lower the entry barrier and 2) stimulate innovation. This does both.

~~~
novaRom
Another point is that silicon-related tech is currently leaving the US and
beginning to boom in China.

------
truth_seeker
Any Chisel developers here ?

How fast are the iterative development and library ecosystem compared to
traditional native RTL design tools?

~~~
seldridge
I'm one of the Chisel devs.

My biased view is that iterative development with Chisel, to the point of
functional verification, is going to be faster than in a traditional RTL
language primarily because you have a robust unit testing framework for Scala
(Scalatest) and a library for testing Chisel hardware, ChiselTest [^1].
Basically, adopting test driven development is zero-cost---most Chisel users
are writing tests as they're designing hardware.

Note that there are existing options that help bridge this gap for
Verilog/VHDL like VUnit [^2] and cocotb [^3].

For libraries, there's multiple levels. The Chisel standard library is
providing basic hardware modules, e.g., queues, counters, arbiters, delay
pipes, and pseudo-random number generators, as well as common interfaces,
e.g., valid and ready/valid. Then there's an IP contributions repo (motivated
by something like the old tensorflow contrib package) where people can add
third-party larger IP [^4]. Then there's the level of standalone large IP
built using Chisel that you can use like the Rocket Chip RISC-V SoC generator
[^5], an OpenPOWER microprocessor [^6], or a systolic array machine learning
accelerator [^7].

There are comparable efforts for building standard libraries in SystemVerilog,
notably BaseJump STL [^8], though SystemVerilog's limited parameterization and
lack of parametric polymorphism limit what's possible. You can also find lots
of larger IP ready to use in traditional languages, e.g., a RISC-V core [^9].
Because the user base of traditional languages is larger, you'll likely
find more IP in those languages.

[^1]: [https://github.com/ucb-bar/chisel-testers2](https://github.com/ucb-
bar/chisel-testers2)

[^2]: [https://vunit.github.io/](https://vunit.github.io/)

[^3]: [https://docs.cocotb.org/en/latest/](https://docs.cocotb.org/en/latest/)

[^4]: [https://github.com/freechipsproject/ip-
contributions](https://github.com/freechipsproject/ip-contributions)

[^5]: [https://github.com/chipsalliance/rocket-
chip](https://github.com/chipsalliance/rocket-chip)

[^6]:
[https://github.com/antonblanchard/chiselwatt](https://github.com/antonblanchard/chiselwatt)

[^7]: [https://github.com/ucb-bar/gemmini](https://github.com/ucb-bar/gemmini)

[^8]: [https://github.com/bespoke-silicon-
group/basejump_stl](https://github.com/bespoke-silicon-group/basejump_stl)

[^9]:
[https://github.com/openhwgroup/cva6](https://github.com/openhwgroup/cva6)

~~~
truth_seeker
Gracias.

------
novaRom
Can someone please tell me how photomasks are produced? I don't understand
how tiny features can be printed at almost the same scale as the final
structure. With a laser beam?

Say, as an input you have a layer description (schematics): how can you
transfer it to a tiny scale precisely enough to produce a mask?

~~~
ric2b
They aren't built at the same scale, they're much larger than the final
structure and lenses are used to scale the image down to the desired size.

Here's a video from Intel on how they are made:
[https://youtu.be/u3ws0UebnSE](https://youtu.be/u3ws0UebnSE)

Apparently they use "electron beams"; I'm not sure what those are, but from
this video they sound similar to lasers, only with electrons:
[https://youtu.be/PWV9pvdRBNY](https://youtu.be/PWV9pvdRBNY)

~~~
asgeir
Wouldn't that just be something like CRT?
[https://en.wikipedia.org/wiki/Cathode-
ray_tube](https://en.wikipedia.org/wiki/Cathode-ray_tube)

~~~
imtringued
It's probably closer to how an electron microscope works.

------
WatchDog
My understanding of ASIC production is that new circuit designs are capital
intensive: they require masks to be produced and machines to be configured for
the given pattern.

Are older processes more automated?

Can the 130nm production line, produce many different designs without any
manual intervention?

~~~
bradstewart
Mask sets will still be required for each chip (to my knowledge), but they are
significantly cheaper on older processes like 130nm.

The process design kit (or PDK) mentioned in the article takes care of
"configuring the machines". The PDK describes how to construct low-level
primitives (the instruction set, if you will) for the specific fab.
Designers then layer their logic circuits on top of those primitives.

------
phendrenad2
This is cool! But I fear the workflow to get a chip out the door requires a
lot of niche specialized knowledge. Making a logic design work on an FPGA is
much easier, because the chip overhead (stuff like I/O pins) is all handled
for you. If I had to design my own I/O pins at the silicon level, I wouldn't
know where to start. And having access to open-source tools that give me the
ABILITY to build a chip doesn't help with the _knowledge_ I'm missing.

I think, however, that this may help Google integrate into academia. I can
imagine a lot of MSEE and PhD students are looking at this hungrily.

~~~
riking
The project comes with a standard harness around your 10mm^2 design, with
provided I/O and a working RISC-V supervisor CPU.

------
makapuf
Interesting, 130nm was achieved by Pentium III as a reference:
[https://en.m.wikipedia.org/wiki/130_nm_process](https://en.m.wikipedia.org/wiki/130_nm_process)

------
jitendrac
That is great. It will encourage a new hobbyist open-source ecosystem around
the hardware community, just like many FOSS communities.

Even engineers/students from countries with fewer resources will now be able
to design and make prototypes in a viable way.

------
cmrdporcupine
I have a friend who has a Verilog clone of the C64 VIC-II chip, which he has
interfaced into a real C64 and it's running pretty much everything, demos,
etc. even supports weird things like the lightpen.

I wonder if his project would fit the bill here... real VIC-II chips are dying
all over the place and getting hard to find... manufactured ASICs to replace
them could be a popular item....

------
ajb
So, this doesn't appear to have been announced by Google. It does seem to be
real, but the OP may be jumping the gun a bit.

The authoritative source seems to be the slides of this guy at google:
[https://docs.google.com/presentation/d/e/2PACX-1vRtwZPc8ykkk...](https://docs.google.com/presentation/d/e/2PACX-1vRtwZPc8ykkkgtUkHkoJZrP9jKOo3FYdKqbg-
So0ic6_kx7ha1vHnxrWmuxWkTc9GfC8xl0TfEpMLwK/pub?start=false&loop=false&delayms=3000#slide=id.g8a02ce4cad_0_207)

From the slides this is "current plans, subject to change". This is an 'open
source shuttle process'. Shuttle processes are a relatively cheap way of
making _small numbers_ of chips (it is actually more costly per chip, but the
fixed cost is smaller). There will be some kind of approval process, and I
would imagine that there is a capacity limit for both the number of chips and
number of projects.

(I didn't have time to watch the talk, so the above is just from the slides)

~~~
cottonseed
Maybe watch the talk first before commenting? All these questions are answered
in the talk.

~~~
ajb
Well since you evidently did have time to watch the talk, perhaps you also
have time to enlighten us about these questions. If not, maybe don't criticise
those of us who used at least some of our time to dig out some information for
the thread.

~~~
mav3rick
You could have just watched the talk instead of reprimanding him.

------
jcun4128
How "bad" is that compared to standard/common 14nm, etc...

~~~
DCKing
130nm was used to make the Athlon XP, Athlon 64, Pentium M, Pentium 4 and
PowerPC G5 in the 2001-2003 timeframe [0]. So at the peak of 130nm's
performance spectrum, it was able to produce stuff that can still run 2020
software quite okay. The Athlon 64 is probably the best 130nm silicon produced
in its heyday and it's in the ballpark of a Raspberry Pi 4 (which has a 28nm
SoC) in single core benchmarks.

I don't think this program is meant or likely to produce high frequency
100mm2+ chips (and it's worth remembering those chips had a lot of engineering
effort put in them outside of manufacturing process) but it should permit
chips of somewhat decent performance. It's a very generous thing!

[0]:
[https://en.wikipedia.org/wiki/130_nm_process](https://en.wikipedia.org/wiki/130_nm_process)

~~~
walrus01
As I recall from that generation, also the first models of AMD opterons,
commonly built into dual socket motherboards. For the time they were very
speed competitive with the Intel option.

~~~
innocenat
I think at that time, the Opteron was THE server processor.

~~~
walrus01
I recall the dual socket (everything was single core at the time) Xeon being
particularly unimpressive.

In fact it was somewhat of a step backwards from the better thermals/power
efficiency of a dual-socket, 1.13 to 1.4 GHz / 512KB cache Tualatin Pentium III.

------
gentleman11
Simple question: aren’t you basically not allowed to make chips because of
patents? Not literally forbidden, but aren’t there so many patents that you
can’t really work without violating one, even if you have never heard of it or
the technique before? It just sounds so hazardous

------
awalton
Is enough of the PDK open now to allow for actual hacking on devices? I have a
rather simple analog chip I'd love to make for my own personal uses (I'd love
a really long modern bucket brigade device to build gritty analog delay lines
for synth hacking)...

------
chvid
Sounds interesting, but what would you build as an open-source chip?

I mean, 130nm is 20-year-old technology, and you can buy general-purpose CPUs
today which are night and day faster than anything made with 130nm, allowing
you to emulate anything specialized in software.

~~~
Symmetry
Gate level emulation is really, really slow. If you've got a nice abstraction
like the x86 ISA you can simulate a chip at that level far faster but if
you're interested in the net level design rather than the abstraction
emulation will be way, way slower. At least in throughput, it takes a long
time to fab a chip and so you really ought to do emulation first in any event.

And for gate/line level effects things get slower still. Back when I was doing
my master's thesis I was running simulations over the weekend on sequences of
100s of instructions in SPICE.
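A toy Python sketch of why net-level simulation is so slow: a 4-bit ripple-carry adder evaluated gate by gate costs dozens of primitive evaluations for one addition the host CPU does in a single instruction (the gate decomposition and counts here are illustrative, and real gate-level simulators also track timing and signal events):

```python
# Toy gate-level simulation: a ripple-carry adder built from primitive
# gates, with a counter showing how many gate evaluations one add costs.

GATE_EVALS = 0

def gate(fn):
    def wrapped(*bits):
        global GATE_EVALS
        GATE_EVALS += 1
        return fn(*bits)
    return wrapped

AND = gate(lambda a, b: a & b)
OR  = gate(lambda a, b: a | b)
XOR = gate(lambda a, b: a ^ b)

def full_adder(a, b, cin):
    s1 = XOR(a, b)
    return XOR(s1, cin), OR(AND(a, b), AND(s1, cin))  # (sum, carry)

def add4(x, y):
    """Add two 4-bit numbers one bit at a time, like the netlist would."""
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add4(9, 5) == 14        # same answer as the host CPU's single ADD
print(GATE_EVALS, "gate evaluations for one 4-bit add")  # 20 evaluations
```

Scale that overhead up to millions of gates per cycle, and then drop below gates to SPICE-level transistor equations, and weekend-long runs for hundreds of instructions stop being surprising.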

------
navanchauhan
Can I in theory build one optimised for running one program? Will it be of any
benefit?

~~~
riffraff
Yes, but it depends on the program; that's what the whole ASIC industry for
Bitcoin mining is.

~~~
navanchauhan
I was thinking about AutoDock Vina (molecular docking software); I have
literally zero knowledge about hardware :(

Then again, this is going to be a really fun experience

~~~
dekhn
Yes, you can build dedicated ASICs. No, it's not worth it for docking
software.

~~~
ascorbic
According to Wikipedia there has been some success accelerating AutoDock with
FPGAs, so maybe it could be.
[https://en.m.wikipedia.org/wiki/AutoDock](https://en.m.wikipedia.org/wiki/AutoDock)

~~~
dekhn
yes but since drug discovery isn't bottlenecked by virtual docking throughput,
it doesn't matter.

We've seen similar approaches applied to BLAST, and in the end, everybody ends
up giving up the ASIC or the FPGA because it's not cost effective long-term.

------
blackrock
I wonder if you can make micro machines at this level? The MEMS thing.

I always wondered why you needed gearing mechanisms in a micro machine. Has
there ever been a practical application for gears in MEMS?

~~~
stephen_g
Not with this PDK or process, no. MEMS processes are quite specialised, and I
believe this project only supports digital standard cells currently, with IO
and analogue/RF stuff coming eventually (it's on the roadmap in the
slides).

~~~
qaute
> I wonder if you can make micro machines at this level? The MEMS thing.

At this size range, basic accelerometers, pressure sensors, and inkjet heads
are absolutely doable; state-of-the-art MEMS (mechanical vibrating frequency
filters for RF receivers in phones, accelerometers) can even have sub-100nm
dimensions.

> Not with this PDK or process, no. MEMS processes are quite specialised.

But yeah, this is the problem. Although ICs and MEMS devices are made with
similar tools, MEMS usually needs processing steps that don't play nicely with
the steps in an IC process (e.g., etching away huge amounts of silicon to
leave gaps and topography, or using processing temperatures and materials that
mess up ICs). This SkyWater process cannot do MEMS.

A more general problem is that different MEMS devices often need different
incompatible process steps, so a standardized process is infeasible (though
[http://memscap.com/products/mumps/polymumps](http://memscap.com/products/mumps/polymumps)
tries).

However, there is a tiny chance that, if we get enough detail on the process
steps and leeway in the design rules, a custom layout could implement a
rudimentary accelerometer or something that works after post-processing (say,
a dangerous HF bath), but only with intimate knowledge of said process steps
(e.g., internal material stress levels) and a lot of luck.

------
ibobev
Could someone explain: is there any advantage to producing a 130nm custom SoC
compared to using a lower-node FPGA for the same design?

~~~
rnestler
Current consumption. FPGAs are quite energy intensive compared to ASICs.

Also for analog stuff you can't use FPGAs. And if you need an ASIC anyways for
that why not include the digital part as well?

~~~
Symmetry
The crossover point where ASICs become less expensive than FPGAs is also lower
than you might think even including mask costs, provided it's on an older
process node.
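The break-even arithmetic behind that crossover is simple: the ASIC's one-time NRE (masks, tooling) is amortized by its lower per-unit cost. A minimal sketch, with every dollar figure a made-up placeholder rather than a real 130nm quote:

```python
# Break-even volume for ASIC vs FPGA: one-time NRE (masks, tooling) is
# recovered through the cheaper per-unit cost. All numbers are illustrative.
import math

def breakeven_units(nre_asic, unit_asic, unit_fpga):
    """Smallest volume at which total ASIC cost drops below total FPGA cost."""
    if unit_fpga <= unit_asic:
        return None  # ASIC never wins on cost alone
    return math.ceil(nre_asic / (unit_fpga - unit_asic))

# Hypothetical mature-node numbers: $150k NRE, $4/unit ASIC vs $40/unit FPGA.
volume = breakeven_units(nre_asic=150_000, unit_asic=4, unit_fpga=40)
print(volume)  # 4167 units
```

With mature-node mask sets in the low six figures, the crossover can land in the low thousands of units, well below the millions people often assume.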

------
derefr
So, until now, there's been this niche for FPGAs, where people would buy them
in decent numbers to use _with static programming, in production devices_ ,
simply because they needed some custom DSP or some-such, but the capital costs
of an ASIC fab-run would be a killer for their project.

Has this announcement thrown that use-case for FPGAs out the window?

~~~
gsmecher
The economics of FPGAs and ASICs over the past few decades are covered really
well in [1]. It's almost always about production volume. The FPGA's ability to
be reprogrammed is often a convenient side-effect.

In short, no, this doesn't impact the trade-offs and wouldn't even if Google
provided this service as a commercial print-on-demand offering. You can get an
ASIC fab'd on older nodes for surprisingly cheap, if you have the know-how and
access to tools [2].

When power is a first-class design problem (it frequently is), even an "old"
28nm FPGA like Xilinx's 7 series will run rings around a 130nm ASIC. The
extra silicon you're powering in the FPGA is more than offset by the
economical access it gives you to modern nodes with lower voltages.

[1]:
[https://ieeexplore.ieee.org/document/7086413](https://ieeexplore.ieee.org/document/7086413)
[2]: [https://spectrum.ieee.org/tech-
talk/computing/hardware/lowbu...](https://spectrum.ieee.org/tech-
talk/computing/hardware/lowbudget-chip-design-how-hard-is-it)

------
chrisshroba
Could anyone offer an explanation of what this means, for all of us who have
no experience with hardware at all?

------
lokl
Could this be suitable for a camera sensor? I don't know anything about
hardware, but I am intrigued by the idea of exploring new camera sensor ideas.

Edit: Nevermind, another comment says 10 mm^2 per project. That's probably too
small for the type of camera sensor I have in mind.

------
matheusmoreira
That's really awesome. If that gives us widely available open source hardware,
our computing freedom will always be safeguarded. We'll always be able to run
any software we want even if the hardware is not as good as proprietary
designs.

------
dooglius
If you wanted to do something with a hardware root-of-trust, would the GDSII
leak needed secrets (i.e. any private keys could be extracted by looking at
what you're required to open up), or is that done in some special post-fab
way?

~~~
yjftsjthsd-h
Could you burn in the private key using fuses?

~~~
riking
Take a look at the Google Titan chip slides for an idea of how to implement
this:
[https://www.hotchips.org/hc30/1conf/1.14_Google_Titan_Google...](https://www.hotchips.org/hc30/1conf/1.14_Google_Titan_GoogleFinalTitanHotChips2018.pdf)
Video:
[https://youtu.be/ve_64dbM4YI?t=3089](https://youtu.be/ve_64dbM4YI?t=3089)

Specifically, slides 35-40. You burn a feature fuse to unlock manufacturing
test features. The device is personalized with a serial number and told to
generate a private key, and the record is stored in a database. Then the key
is locked in by burning a second feature fuse that disables any future writing
to those segments.
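That fuse-gated lifecycle can be sketched in Python. Everything here (`FuseController`, the method names, the state flags) is invented for illustration; real fuse controllers are one-way hardware, which the flags only imitate:

```python
# Sketch of a fuse-gated key-provisioning lifecycle, loosely following the
# sequence in the Titan slides: unlock test mode, personalize, then burn a
# second fuse that permanently locks the key segments. Names are invented.
import secrets

class FuseController:
    def __init__(self):
        self.test_mode = False   # set by a one-way feature fuse
        self.locked = False      # set by a second one-way fuse
        self.serial = None
        self.key = None

    def burn_test_fuse(self):
        """One-way: enables manufacturing-test / provisioning features."""
        self.test_mode = True

    def personalize(self, serial):
        if not self.test_mode or self.locked:
            raise PermissionError("provisioning not allowed in this state")
        self.serial = serial
        self.key = secrets.token_bytes(32)  # device-generated private key
        return self.serial                   # record sent to the database

    def burn_lock_fuse(self):
        """One-way: disables all future writes to the key segments."""
        self.locked = True

chip = FuseController()
chip.burn_test_fuse()
chip.personalize(serial="SN-000123")
chip.burn_lock_fuse()
assert chip.locked and chip.key is not None
```

The security property falls out of the ordering: the key never leaves the device, and once the lock fuse is burned, no later step (including a malicious one) can rewrite it.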

------
ur-whale
This is fantastic, for many reasons, but the two that come immediately to mind
are:

    
    
        - amazingly good for security.
        - finally the public at large will get to understand *in details* how an ASIC is designed.

~~~
chrismorgan
Meta: please don’t use preformatted text for lists. It makes reading much
harder, especially on narrower displays. Just put a blank line between each
item, treat each as a paragraph.

~~~
ur-whale
Thank you for listing your personal preferences, but I also happen to have
mine.

------
yummypaint
Anyone have a sense of how easy it is to audit/verify devices made with this
process? I would love to see some properly trustworthy chips for end-to-end
encryption come out of this.

------
neop1x
Hmm, sounds like Google is hunting for chip design talents. :) We will fab
your design and if it looks good, come work for us. :P

------
kwccoin
The difference is one is pure public good but hardware is not that pure. And
hence you have other concerns and other means.

------
unnouinceput
Quote: " All open source chip designs qualify, no further strings attached!"

There is no such thing as a free lunch! I really wonder what Google's game
plan with this is. 20 years ago they started to make maps + email + office +...
free for everybody, but the game plan was that they gathered everything about
everybody, and now we know. Sorry Google, I don't trust you one bit anymore.

------
threshold
Fantastic Google! Dream come true

------
resters
It would be nice to get all of the HDL used by the HPSDR project fabbed this
way.

------
mysterydip
Would this make it possible to reproduce some historic-but-rare chips like the
4004?

~~~
jecel
This is a 0.13µm CMOS process while the 4004 was made using a 10µm PMOS
technology. So the electrical characteristics would not be the same. If you
don't care about that then the answer is "yes".

An attempt to do something like this would have a Z80, a 6502 and a 68000 in a
single chip (none of them are rare, however):

[https://www.crowdsupply.com/chips4makers/retro-
uc](https://www.crowdsupply.com/chips4makers/retro-uc)

~~~
jabl
Well, the 6502 was famously NMOS which isn't CMOS either. Though wikipedia
tells me there is a '65C02' which is a CMOS version of the 6502.

------
BurnGpuBurn
Are there any Risc-V designs that would plug in to this?

~~~
cottonseed
Yes, there are lots of open-source RISC-V cores. Tim Edwards of efabless has
another talk about creating a RISC-V based ASIC SoC:
[https://www.youtube.com/watch?v=EsEcLZc0RO8](https://www.youtube.com/watch?v=EsEcLZc0RO8)
based on PicoRV:
[https://github.com/cliffordwolf/picorv32](https://github.com/cliffordwolf/picorv32).
PicoRV is part of the efabless IP offerings. The chips will have a PicoRV
harness on them.

~~~
BurnGpuBurn
Thanks!

------
fouc
What are the chances that google will add their own hidden or proprietary
circuitry to any open-source chips? They'll add all sorts of "Security" and
"Tracking" features..

~~~
pjc50
If you hand them GDSII then fiddling with it is very time-consuming and
difficult, but can be spotted by looking at the resulting chip under a
microscope.

(Not entirely simple at 130nm as this is shorter than the wavelength of
visible light!)

~~~
imtringued
You don't need to look at the smallest features of a transistor to notice that
the chip has 30% more transistors than your original design.

~~~
Symmetry
Also, adding something like that would be an incredible amount of work.
Basically you would have to totally re-do the layout even if you're just
adding a macro somewhere, and totally re-design it if you're not going to end
up causing a drastic decrease in max clock rate. That's for a management
engine or tracking-style thing. A backdoor that makes #5F0A40A3 equal to every
other number for password bypass wouldn't be that invasive and might only slow
things down by a little bit, so I guess that's a possibility if a certain
design becomes really popular?

