
Ask HN: Where do I get started on ASICs, FPGA, RTL, Verilog et al.? - bharatkhatri14
I actually want to understand the chip manufacturing process - design, prototyping (using FPGAs, etc.), the baking process in foundries. And also at least a basic understanding of how IP is managed in the chip industry - "IP core" is a term I hear frequently, but due to the myriad interpretations available online, I don't really understand what an "IP core" really means. Hoping to get useful advice from veterans in the chip industry.
======
jleahy
I wouldn't get too hung up on the phrase 'IP core'; it's basically the equivalent of a software library - a reusable chunk of silicon or Verilog.

If you want to know about how chips are made then I'd highly recommend the book "CMOS Circuit Design, Layout, and Simulation" by Baker. It starts off telling you how silicon is etched to make chips, then goes through how MOSFETs work and how to simulate them using SPICE. By the time you're halfway through the book you'll know how a static CMOS logic gate works (down to the electrons).

If you'd rather learn something that you'll be able to apply yourself (without building a chip fab), then the place to start is Verilog (or VHDL). asic-world.com has some good tutorials. You can simulate what you've written using Icarus Verilog and look at the results using GTKWave. If it works in simulation and you want to put it onto a real FPGA, it's then just a matter of fighting the Xilinx/Altera/Lattice tools until they give you a bitstream. If you have enough money (a lot) you could even get a physical ASIC manufactured.
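
To make that flow concrete, here is a minimal sketch (file and module names are just illustrative):

    // counter.v -- a tiny 4-bit counter, the "hello world" of RTL
    module counter(input clk, input rst, output reg [3:0] count);
      always @(posedge clk)
        if (rst) count <= 4'd0;
        else     count <= count + 4'd1;
    endmodule

    // counter_tb.v -- testbench: make a clock, pulse reset, dump a waveform
    module counter_tb;
      reg clk = 0, rst = 1;
      wire [3:0] count;
      counter dut(.clk(clk), .rst(rst), .count(count));
      always #5 clk = ~clk;        // free-running clock
      initial begin
        $dumpfile("counter.vcd");  // waveform file for GTKWave
        $dumpvars(0, counter_tb);
        #20 rst = 0;               // release reset
        #200 $finish;
      end
    endmodule

    // Run it:
    //   iverilog -o counter_sim counter.v counter_tb.v
    //   vvp counter_sim
    //   gtkwave counter.vcd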

~~~
pslam
> If you want to know about how chips are made then I'd highly recommend the
> book "CMOS Circuit Design, Layout, and Simulation" by Baker.

This is a ~$120 book. Do you (or anyone else) have a recommendation for
something a little easier to tell hobbyists they should get?

~~~
AlfonsoP
The book PDF is here:
[https://www.u-cursos.cl/usuario/9553d43f5ccbf1cca06cc02562b4...](https://www.u-cursos.cl/usuario/9553d43f5ccbf1cca06cc02562b4005e/mi_blog/r/CMOS_Circuit_Design__Layout__and_Simulation__3rd_Edition.pdf)

Or buy a used second edition at $20: [https://www.amazon.com/gp/offer-listing/047170055X/](https://www.amazon.com/gp/offer-listing/047170055X/)

------
zoenolan
Nand2Tetris [1] is a good starting place. You should get a good view of how the different levels interact. Coursera has two courses [2][3] that cover the same material as the book.

[1] [http://nand2tetris.org/](http://nand2tetris.org/)

[2] [https://www.coursera.org/learn/build-a-computer](https://www.coursera.org/learn/build-a-computer)

[3] [https://www.coursera.org/learn/nand2tetris2](https://www.coursera.org/learn/nand2tetris2)

~~~
indigochill
I second this course. It's awesome. The next step from there is probably Coursera's VLSI course, starting with [https://www.coursera.org/learn/vlsi-cad-logic](https://www.coursera.org/learn/vlsi-cad-logic). It's all about how real-world VLSI CAD tools work.

~~~
fstephany
Thanks! I finished Nand2Tetris and was wondering where to look for a good next
step.

------
j_s
Here's my collection of discussions with recommendations that I've saved just
in case I decide to take a single step in this direction someday:

Open Source Needs FPGAs; FPGAs Need an On-Ramp |
[https://news.ycombinator.com/item?id=14008444](https://news.ycombinator.com/item?id=14008444)
(Apr 2017)

GRVI Phalanx joins The Kilocore Club |
[https://news.ycombinator.com/item?id=13448166](https://news.ycombinator.com/item?id=13448166)
(Jan 2017)

What It Takes to Build True FPGA as a Service |
[https://news.ycombinator.com/item?id=13153893](https://news.ycombinator.com/item?id=13153893)
(Dec 2016)

Low-Power $5 FPGA Module |
[https://news.ycombinator.com/item?id=9863475](https://news.ycombinator.com/item?id=9863475)
(Jul 2015)

------
sandGorgon
It's quite late here, so I'll be brief. You mentioned the words "how IP is managed in chip industry" - so I'm going to move past the bookish knowledge and the tutorials and the open source code.

The chip design and EDA industries are very closed and niche - there is so much knowledge there that is not part of any manual.

For example, as a newcomer you wouldn't even know what testing and validation in chip design involve - or that formal verification is an essential part of testing.

You wouldn't know what synthesis is, what place and route is, or what GDS masks for foundries are.

There is seriously no place to learn this. The web design or AI world works very differently - you can become a very productive engineer through Udacity. Not with ASICs.

You need to find a job in the chip design or EDA industry. There is seriously
no other way.

If I had to make a wild parallel, the only other industry that works like this is compiler engineering - people who make compilers for a living. Same technology, same problems, similar testing steps, I guess.

~~~
tinco
I think your compilers example doesn't fit. Compilers are actually a rather straightforward thing to build: many undergraduates build one as part of their studies, and some hobbyists build compilers that get used by thousands of people and in Fortune 500 companies' core infrastructure. There's little rigor involved.

Among the few software systems that need rigor are control systems for
physical installations and trading/finance systems for example.

~~~
nostrademons
Also many production-grade compilers (GCC/G++, Clang, OpenJDK, V8, and almost
every new language that's come out since the 90s) are open-source. You can go
read the commit logs & source code to see how they work, if you're diligent
and willing to slog through them. There are certainly tricks that professional
compiler writers use that aren't covered in textbooks (the big ones center
around error-reporting, incremental compilation, fancy GC algorithms, and
certain optimizations), but you can always go consult the source to learn
about them.

I thought the thread was really about domains where the bulk of knowledge is
locked up in industry rather than being about rigor, but I'd put control
systems in that category as well. Also information retrieval (Google's search
algorithms are about 2 decades ahead of the academic state-of-the-art...the
folks at Bing/A9/Facebook know them too, but you aren't going to find them on
the web), robotics, and aerospace.

~~~
sandGorgon
I'm generally talking about Intel C++ compilers, etc. When I was doing EDA, we used to fork out a lot of cash for these compilers, and these guys used to work very closely with us to optimize for certain kinds of code styles - for example loop unrolling on HP-UX, etc. I don't know if this is still a thing, so I might be mistaken.

------
gluggymug
As a veteran of the chip industry, I should warn you that all these suggestions about FPGAs for prototyping describe something that isn't really done that much in the ASIC industry.

The skills to do front end work are similar but an ASIC design flow generally
doesn't use an FPGA to prototype. They are considered slow to work with and
not cost effective.

IP cores in ASICs come in a range of formats. "Soft IP" means the IP is not physically synthesised for you. "Hard IP" means it has been. The implications are massive for all the back end work. Once the IP is Hard, I am restricted in how the IP is tested, clocked, reset and powered.

For front end work, IP cores can be represented by cycle accurate models.
These are just for simulation. During synthesis you use a gate level model.

~~~
pslam
As a veteran from the chip industry, I can tell you my experience is
completely the opposite.

Nobody in their right mind would produce an ASIC without going through
simulation as a form of validation. For anything non-trivial, that means FPGA.

~~~
gluggymug
I don't agree. If it's non-trivial, I don't have the more advanced verification tools such as UVM if I prototype via FPGA.

The ability to perform constrained randomised verification is only workable
via UVM or something like it. For large designs that is arguably the best
verification methodology. Without visibility through the design to observe and
record the possible corner cases of transactions, you can't be assured of
functional coverage.

While FPGAs can run a lot more transactions, the ability to observe coverage
of them is limited.
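
For readers new to the jargon, here is a rough sketch of what constrained-random stimulus looks like in SystemVerilog (all names below are invented; a real UVM bench wraps this in sequences, drivers, scoreboards and covergroups):

    // A hypothetical bus transaction with randomised fields
    class bus_txn;
      rand bit [31:0] addr;
      rand bit [7:0]  data;
      rand bit        is_write;
      // Constrain the random space to legal/interesting cases
      constraint legal_addr   { addr inside {[32'h0:32'hFFFF]}; }
      constraint mostly_write { is_write dist { 1 := 3, 0 := 1 }; }
    endclass

    module txn_demo;
      initial begin
        bus_txn t = new();
        repeat (1000) begin
          if (!t.randomize()) $error("randomize() failed");
          // drive t onto the DUT here; covergroups would record which
          // corner cases were hit -- the visibility FPGAs don't give you
        end
      end
    endmodule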

I have worked on multiple SoCs for Qualcomm, Canon and Freescale. FPGAs don't
play a role in any SoC verification that I've worked on.

~~~
spear
That's a false dichotomy -- you can do FPGA verification in addition to
simulation-based verification. And yes, there are ASIC teams that have
successfully done that.

~~~
gluggymug
At the SoC level, I don't think so.

The reasons are numerous. I already gave a few. I will give another. Once you
have to integrate hard IP from other parties, you cannot synthesise it to
FPGA. Which means you won't be able to run any FPGA verification with that IP
in the design. You can get a behavioural model that works in simulation only.
In fact it is usually a requirement for Hard IP to be delivered with a cycle
accurate model for simulation.

I'll give another reason. If you are verifying on FPGA you will be running a
lot faster than simulation. The Design Under Test requires test stimulus at
the speed of the FPGA. That means you have to generate that stimulus at speed
and then check all the outputs of the design against expected behaviour at
speed. This means you have to create additional HW to form the testbench
around the design. This is a lot of additional work to gain speed of
verification. This work is not reusable once the design is synthesised for
ASIC.

I can go on and on about this stuff. Maybe there are reasons for a particular
product but I am talking about general ASIC SoC work. I got nothing against
FPGAs. I am working on FPGAs right now. But real ASIC work uses simulation
first and foremost. It is a dominant part of the design flow and FPGA
validation just isn't. On an "Ask HN", you would be leading a newbie the wrong way by pointing to FPGAs. It is not done a lot.

~~~
TomVDB
As another veteran in the ASIC industry: we are using FPGAs to verify billion-transistor SoCs before taping out, using PCBs that have 20 or more of the largest Xilinx or Altera FPGAs.

It's almost pointless to make the FPGA run the same tests as in simulation.
What you really want is to run things that you could never run in simulation.
For example: boot up the SoC until you see an Android login screen on your LCD
panel.

A chip will simply not tape out before these kinds of milestones have been met,
and, yes, bugs have been found and fixed by doing this.

The hard macro IP 'problem' can be solved by using an FPGA equivalent. Who
cares that, say, a memory controller isn't 100% cycle accurate? It's not as if
that makes it any less useful in feeding the units that simply need data.

------
ChuckMcM
There are three things here that you've intertwined.

 _Process_ -- This is the science of creating circuits on silicon wafers using lithography, etching, and doping. There is a large body of knowledge around the physics involved here. Materials science, physics, and silicon fabrication are all good places to start.

 _Chip Design_ -- This is creating circuits which are run through a tool that can lay them out for you automatically. HDLs teach you to describe the semantics of the hardware in such a way that a tool can infer actual circuits. Generally a solid understanding of digital logic design is a prerequisite, and then you can learn the more intimate details of timing closure, floor planning, signal propagation, and the tradeoffs of density and speed.

 _IP_ -- Intellectual property law is a huge body of law, but most of the IP around chips is patent law (how the chips are made) and copyright law (how they are laid out).

~~~
mcshicks
Yes, that's true, but there's actually even more, like packaging, testing, etc. I took a free online course from Stanford called "nanomanufacturing", but it really was mostly about chip manufacturing, packaging, etc. Even though I worked in the semiconductor industry for 12 years (mostly bench testing preproduction ASICs), I still found it really useful. Not sure if you can still view the archives here if you sign up for an account (I can, but I took the class):

[https://lagunita.stanford.edu/courses/Engineering/Nano/Summe...](https://lagunita.stanford.edu/courses/Engineering/Nano/Summer2014/info)

No substitute for learning the physics, but at least it kind of gives you some idea of what's involved. In addition to all the crazy technology involved in fabricating the chips, the packaging technology has gotten really sophisticated. It can be very confusing what the difference is between BGA, WLCSP, stacked dies, etc. Anyway, the course covered a lot of different types of processing with examples.

~~~
ChuckMcM
That is awesome. The link doesn't work for me, but I didn't really expect it to. My first job in the Bay Area was working for Intel, and about 6 months in I was offered some 'counterfeit' or grey market Intel DRAM chips (as a microcomputer enthusiast, not as an Intel employee). I took the offer to security, who gave me the cash to buy a tube of them, which I did, and they disassembled them to figure out where in the packaging pipeline they had gone missing.

Sadly I never got to hear the full story on how they came to be but I did get
a good look at the packaging pipeline that Intel used at the time. It was
extensive even then with half a dozen entities providing steps in the path.

------
CalChris
An ASIC is pretty expensive unless you've got Google money [1]. Start with an
FPGA dev board [2] and probably just stick with FPGA. Hell, Amazon has an FPGA
instance [3]:

[1] [https://electronics.stackexchange.com/questions/7042/how-muc...](https://electronics.stackexchange.com/questions/7042/how-much-does-it-cost-to-have-a-custom-asic-made)

[2] [https://www.sparkfun.com/products/11953](https://www.sparkfun.com/products/11953)

[3] [https://aws.amazon.com/ec2/instance-types/f1/](https://aws.amazon.com/ec2/instance-types/f1/)

~~~
jleahy
You can actually get an ASIC manufactured for a few thousand dollars via CMP or Europractice, so not quite Google money. The difficulty is in paying for the software licenses you need to go from Verilog to DRC-checked GDSII files (which is what you need to send to them).

In fact, personally I think this is a much better route for open source hardware. Reverse engineering FPGA bitstreams is impressive, but you're swimming against the tide. If we had good open source tooling for synthesis/place-and-route/DRC checking and good open source standard cell libraries (and these things exist, e.g. qflow; they're just not amazing currently) then truly open source hardware could actually be a thing. Even on 10-year-old fabs you'll get much better results than you could on an FPGA (you just have to get it right first time).

~~~
exikyut
I'm very interested in this. I read that one small design (which I knew was
very small, but not quantitatively so) cost approximately $5k per small run.

How is the cost calculated? I presume the size of the final wafer (i.e., number of chips produced) at least; does transistor count per chip influence anything too?

Finally, is it _possible_ to produce and maintain a fully open-source design
that's the chip-fab equivalent of the book publishing industry's "camera-ready
copy"? I get the idea that this is specifically where things aren't 100% yet,
but, using entirely open tools, can you make something that is usable?

~~~
jleahy
The cost depends on the technology. More modern processes are dramatically more expensive (don't even think about something like 28nm). If you're looking at something ancient like 130nm or 0.35um, then it's going to be something like $1-2k per mm^2, and normally there's a minimum of a few mm^2. Expect to get a few tens of chips back. Transistor count has no effect; they couldn't care less what you put inside your part of the wafer, so long as it follows the design rules (for example minimum % coverage on each layer, to prevent it collapsing vertically), but obviously if you need more transistors you might need more area.
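
To put rough numbers on it (purely illustrative, using the figures above):

    3 mm^2 minimum  x  $1,500 per mm^2  ~=  $4,500 for the run

which lines up with the "few thousand dollars" figure for CMP/Europractice mentioned earlier.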

Yes, it's 100% possible to do an open-source design - qflow has been used to make sub-circuits of ASICs - but it's going to be extremely difficult. There are lots of things missing which you'd have to take from the fab's PDK or design yourself, for example open source I/O pads (sounds boring, but actually lots of work with ESD, etc.). Combined with huge missing feature-sets in the open source tools, like extraction of designs back to SPICE circuits with parasitics, and complete DRC checking, you're not going to have a fun time.

~~~
exikyut
Hmm, I see. Thanks for this info, and particularly the bit about the pricing.

------
Veratyr
> And also at least a basic understanding of how IP is managed in chip
> industry - like "IP core" is a term that I frequently hear but due to the
> myriad interpretations available online, I don't really understand what an
> "IP core" really means.

I'm by no means a veteran but my understanding is that "IP core" refers to a
design you buy from someone else. Say you want a video encoder on your smart
fridge SoC. You can either spend a whole lot of time, manpower and money
developing one yourself or you can license the design from someone else who
already has one and just dump it in.

You'd only do this when you want to integrate the design into your own (likely mass-manufactured) chip. You can also often buy a packaged chip that serves the same function for much less, but doing that is a tradeoff: you can do it at very low volume and cost, but you potentially lose a bunch of efficiency in terms of space and power.

~~~
CyberFonic
That is my understanding as well. In a commercial setting, reinventing the wheel is economically a bad idea. For the company licensing out the IP core, the licence revenues are another form of return on investment for the design effort. Companies like ARM are "fabless", i.e. they create IP cores and license them to semiconductor manufacturers.

------
exikyut
I've gotten curious about FPGAs myself of late, particularly with video
capture. A hopefully-on-topic question of my own, if I may:

I've seen that some FPGA boards have HDMI transceivers that will decode TMDS
and get the frame data into the FPGA somehow. That got me thinking about
various possibilities.

- I want to build a video capture device that will accept a number of TMDS (HDMI, DVI, DisplayPort) and VGA signals (say, 8 or 10 or so inputs, 4-5 of each), simultaneously decode all of them to their own framebuffers, and then let me pick the framebuffer to show on a single HDMI output. This would let me build a video switcher that could flip between channels with a) no delay, b) no annoying resyncs, and c) compensation for resolution differences (eg, a 1280x1024 input on the 1920x1080 output) via eg centering, since everything's on independent framebuffers.

- In addition to the above, I also want to build something that can actually _capture_ from the inputs. It's kind of obvious that the only way to be able to do this is by recording to a circular window in some onboard DDR3 or DDR4 (256GB would hold 76 seconds of 4K @ 144fps). My problem is actually _dumping_/saving the data in a fast way so I can capture more input.

I can see two ways to build this:

1) a dedicated FPGA board with onboard DDR3, a bunch of SATA controllers and
something that implements the equivalent of RAID striping so I can parallelize
my disk writes across 5 or 10 SSDs and dump fast enough.

2) A series of FPGA cards, each of which handles say 2 or 3 inputs, and which uses PCI bus mastering to write directly into a host system's RAM. That solves the storage problem, and would probably simplify each card. I'd need a fairly beefy base system, though; 4K 144fps is 26GB/s, which is uncomfortably close to PCI-e 3.0 x16's limit of 32GB/s.

I'll admit that this mostly falls under "would be really really awesome to
have"; I don't have a commercial purpose for this yet, just to clarify that.
That said, my inspiration is capturing pixel-perfect, perfectly-timed output
from video cards to identify display and rendering glitches (particularly
chronologically-bound stuff, like dropped frames) in software design, so
there's probably a market for something like this in UX research somewhere...

~~~
pjc50
The capture thing appears to exist:
[https://www.blackmagicdesign.com/products/hyperdeckshuttle/](https://www.blackmagicdesign.com/products/hyperdeckshuttle/)
- presumably it applies lossless compression. Lossless encoding within the
h264 container should be possible.

~~~
mschuster91
That Blackmagic stuff is really awesome. Expensive as ..., but worth every
penny.

~~~
j_s
Pardon my ignorance, but I thought Blackmagic's claim to fame was always being
on the low end of the pricing for their tech.

------
orbifold
Regarding the ASIC part: there are several good courses online that explain the whole process, from hardware description in a hardware description language to the GDSII file you could send to a foundry, in some detail. See for example this course: [https://web.csl.cornell.edu/courses/ece5745/](https://web.csl.cornell.edu/courses/ece5745/); there are also very good courses available from Berkeley and MIT. What is usually missing are the gorier details of the backend flow, which can become very involved and complicated depending on your design and process.

Unfortunately, someone who has no access to the EDA tools from Cadence/Synopsys or standard cell library files from foundries cannot really follow along all that far; you are limited to working at the RTL level.

There are several good open source RTL simulators available. I have personally used mostly Verilator, which supports (almost) all synthesizable constructs of SystemVerilog and has performance close to the best commercial simulators like VCS. It compiles your design to C++ code, which you can then wrap in any way you like.

You should also check out [https://chisel.eecs.berkeley.edu/](https://chisel.eecs.berkeley.edu/), which is a hardware description language embedded in Scala. The nice thing about it is that it has a relatively large number of high quality open source examples and designs ([https://github.com/freechipsproject/rocket-chip](https://github.com/freechipsproject/rocket-chip)) and a library of standard components available, something which can't really be said of Verilog/VHDL, unfortunately. As an added bonus you can actually use IntelliJ as an IDE, which blows any of the commercial IDEs available for SystemVerilog or VHDL out of the water.

Another thing I can recommend is to get yourself a cheap FPGA board; some of them are programmable purely with open source tools, see [http://www.clifford.at/icestorm/](http://www.clifford.at/icestorm/). Alternatively, the Arty dev kit comes with a license for the Xilinx Vivado toolchain.

~~~
tmccrmck
I wouldn't start with Chisel, as there's a lack of good documentation online. When I used it for a Berkeley class, you would sometimes feel like you'd hit a wall. Verilog or SystemVerilog will have much more in the way of Stack Overflow-style documentation.

------
source99
For what it's worth, a lot of the time working towards Carnegie Mellon's undergraduate (and graduate) degree in computer engineering revolved around Verilog and FPGAs and ASICs. After learning all sorts of principles and design skills, we learned Verilog so we could actually build decent-sized projects and simulate them. Then we would build real-world projects with an FPGA, and the advanced classes had us designing and simulating ASICs. The really advanced classes had us studying ASIC manufacturing.

------
EvanAnderson
I wanted to learn about Verilog development and to get a better understanding
of what's happening on chips. To that end I bought a MiST FPGA-based computer:
[https://github.com/mist-devel/mist-board/wiki](https://github.com/mist-devel/mist-board/wiki)

It's Altera Cyclone III-based. The MiST wiki says the free "web edition" of the Altera "Quartus II" development environment is sufficient to develop for the unit (though I haven't actually gotten around to doing anything with it yet).

I can't say how the MiST board stacks up to dev boards from FPGA manufacturers. I may be going about this the most wrong way possible, but here was my rationale: I was attracted to the MiST because tutorials were available for it ([https://github.com/mist-devel/mist-board/tree/master/tutoria...](https://github.com/mist-devel/mist-board/tree/master/tutorials)), and because the device could be usable as a retro-computing platform if it ended up being unusable for me for anything else. (Chalk that up to rationalization of the purchase, I guess.)

------
flying_sheep
First of all you must understand logic gates, flip-flops (D-FF, T-FF and so on) and multiplexers; all of them are built from logic gates. To verify that you really understand the concepts, try to implement a digital clock (this one: [https://sapling-inc.com/wp-content/gallery/digital-clocks-fo...](https://sapling-inc.com/wp-content/gallery/digital-clocks-for-sbl-sbt-sbw/Sapling-404-Wall-Mount-White-1230-H.jpg)).
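
If you later want to try that exercise in an HDL, the heart of a digital clock is just a frequency divider feeding counters. A rough Verilog sketch, assuming a 50 MHz board clock:

    // Divide 50 MHz down to a 1 Hz tick, then count seconds 0-59.
    // Minutes and hours repeat the same divide-and-count pattern.
    module seconds_counter(input clk_50mhz, input rst,
                           output reg [5:0] seconds);
      reg [25:0] div;
      wire tick = (div == 26'd49_999_999);  // one pulse per second
      always @(posedge clk_50mhz) begin
        if (rst || tick) div <= 0;
        else             div <= div + 1;
        if (rst)                        seconds <= 0;
        else if (tick && seconds == 59) seconds <= 0;
        else if (tick)                  seconds <= seconds + 1;
      end
    endmodule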

Logic gates are essential to learning digital circuits. After you understand logic gates, you can use them to build many things that are directly related to the application. There are many tools to verify that the logic gates work as designed.

Then, based on the project requirements ($$, time, performance, ...), you can choose FPGA or ASIC to implement the logic gates. FPGAs use arrays of logic gates, while ASICs use CMOS to implement the logic gates. FPGAs are easier to learn and much cheaper: you can buy a development board which costs only a few hundred dollars. ASICs, by contrast, need a lot of domain knowledge and many people involved. ASIC design requires you to understand the electronics in order to build something useful: how the CMOS transistors are implemented (= how a semiconductor becomes conductive), how resistance and capacitance affect performance, how the number of layers affects the wiring layout, and more. And don't forget that manufacturing can introduce defects which cause the IC to malfunction in unexpected ways. Each step in ASIC design needs its own specialist.

~~~
forg0t_username
For 5 seconds I was like "but the clock is the analog part of the circuit,
this does not make any sense". Then I clicked the link.

------
Figs
You may find the book _Contemporary Logic Design_ by Katz and Borriello to be interesting. It's what we used in my college digital logic class.

For my computer architecture class (i.e. the class where we learn how to
design basic CPUs -- and ultimately implemented one in Verilog), we used
_Computer Organization and Design_ by Patterson and Hennessy. That might also
be of interest.

------
jwatte
You can start by buying a Papillon board and going through the Free Range VHDL book. That's the high level.

When you want to understand CMOS and integrated circuits, you need some electronics experimenter kit and a lot of practice with Ohm's law. Then read up on multi-gate transistors and (here my experience stops) lithography and small-scale challenges (tunneling loss is a thing, I suppose?)

Of course, "IP core" can mean different things: it might be some Verilog source, might be some netlists, might be a hard macro for a particular process. You really need to work with it to get the specifics. (Subscribing to EETimes, going to trade shows, and otherwise keeping up might help.)

But at the end of the day, you're asking "how can I become an experienced ASIC engineer," and the truth is that it takes time, education, and dedication.

~~~
duskwuff
The board you're thinking of is the Papilio. The Papillon is a dog breed. :)

------
peterburkimsher
I studied an MEng in Electronic Systems Engineering, and really enjoyed the
courses in IC design.

However, I couldn't find a chip-design job in a country other than the US or
UK that wasn't related to military applications.

Now I work in Taiwan, and I see the chips being made! But my work is related
to control systems for the testing equipment, which is software instead of
hardware design.

I got a Virtex-II FPGA board from a recycling bin, and I wanted to find a good
personal project for it. Even now, I'm at a loss for ideas. I can do
everything I need with a Raspberry Pi already.

Please can someone suggest some good projects I could only do with an FPGA?

~~~
ktta
You could see what the folks at Hackaday.io are doing.

[https://hackaday.io/list/3746-programmable-logic-projects](https://hackaday.io/list/3746-programmable-logic-projects)

------
dsc_
OpenCores.org has a very large collection of open-source IP cores.

------
igk
[http://www.clifford.at/icestorm/](http://www.clifford.at/icestorm/)

An open source FPGA workflow. ASICs are tougher.

------
jsolson
I'd start with learning a hardware description language and describing some hardware. Get started with Verilog itself. I'm a fan of the Embedded Micro tutorials ([https://embeddedmicro.com/tutorials/mojo](https://embeddedmicro.com/tutorials/mojo)) -- see the links under "Verilog Tutorials" on the left (they're also building their own HDL, which unless you own a Mojo board isn't likely of interest). Install Icarus Verilog and run through the tutorials, making sure you can build things that compile. Once you get to test benches, install GTKWave and look at how your hardware behaves over time.

You can think of "IP cores" as bundled-up (often encrypted/obfuscated) chunks of Verilog or VHDL that you can license/purchase. Modern tools for FPGAs and ASICs allow integrating these (often visually) by tying wires together -- in practice you can typically also just write some Verilog to do this, as in the sketch below (this will be obvious if you play around with an HDL enough to get to modular design).
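
Concretely, integrating a core often just looks like instantiating a module you didn't write and wiring it up. A minimal sketch (the vendor_uart core, its ports and its parameter are all invented for illustration):

    module my_soc(input clk, input rst_n, input uart_rx, output uart_tx);
      wire [7:0] rx_byte;
      wire       rx_valid;

      // Hypothetical licensed UART core, delivered as (possibly
      // obfuscated) Verilog; the datasheet gives you the port list.
      vendor_uart #(.BAUD_DIV(434)) u_uart (  // 50 MHz / 115200 ~= 434
        .clk     (clk),
        .rst_n   (rst_n),
        .rx      (uart_rx),
        .tx      (uart_tx),
        .rx_data (rx_byte),
        .rx_dv   (rx_valid)
      );
      // ...the rest of the design consumes rx_byte/rx_valid...
    endmodule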

Just writing and simulating some Verilog doesn't really give you an appreciation for hardware, though, particularly as Verilog can be not-particularly-neatly divided into things that can be _synthesized_ and things that can't, which means it's possible to write Verilog that (seems to) simulate just fine but gets optimized away into nothing when you try to put it on an FPGA (usually because you got some reset or clocking condition wrong, in my experience); see the sketch after this paragraph. For this I recommend buying an FPGA board and playing with it. There are several cheap options out there -- I'm a fan of the Arty series from Digilent ([http://store.digilentinc.com/arty-a7-artix-7-fpga-developmen...](http://store.digilentinc.com/arty-a7-artix-7-fpga-development-board-for-makers-and-hobbyists/)). These will let you play with non-trivial designs (including small processors), and they've got lots of peripherals, roughly Arduino-style.
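
Here's a classic instance of that gap, as a sketch (exact tool behavior varies):

    // Simulates fine, but is NOT synthesizable: #-delays and
    // initial-block stimulus have no hardware meaning.
    module sim_only(output reg [7:0] x);
      initial begin
        x = 0;
        #100 x = 8'd42;  // "wait 100 time units" -- meaningless in silicon
      end
    endmodule

    // The synthesizable way to say "x changes over time": make time
    // explicit as clock edges, with reset defining the initial state.
    module synth_ok(input clk, input rst, input [7:0] next_x,
                    output reg [7:0] x);
      always @(posedge clk)
        if (rst) x <= 8'd0;
        else     x <= next_x;
    endmodule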

If you get that far, you'll have discovered that's a lot of tooling, and the tooling has a lot of options, and there's a lot that it does during synthesis and implementation that's not _at all_ obvious. Googling around for each of the phases in the log file helps a lot here, but given what your stated interest is, you might be interested in the VLSI: Logic to Layout course series on Coursera ([https://www.coursera.org/learn/vlsi-cad-logic](https://www.coursera.org/learn/vlsi-cad-logic)). This talks about all of the logic analysis/optimization those tools are doing, and then in the second course discusses how that translates into laying out actual hardware.

Once you've covered that ground it becomes a lot easier to talk about FPGAs
versus ASICs and what does/doesn't apply to each of them (FPGAs are more like
EEPROM arrays than gate arrays, and for standard-cell approaches, ASICs look
suspiciously like typesetting with gates you'd recognize from an undergrad
intro-ECE class and then figuring out how to wire all of the right inputs to
all of the right outputs).

Worth noting: getting into ASICs as a hobby is prohibitively expensive. The
tooling that most foundries require starts in the tens-of-thousands-per-seat
range and goes up from there (although if anyone knows a fab that will accept
netlists generated by qflow I'd love to find out about it). An actual
prototype ASIC run once you've gotten to packaging, etc. will be in the
thousands to tens of thousands at large (>120nm) process sizes.

~~~
namibj
If I understand correctly what you are saying, it could be possible to make a custom small chip for doing some crypto, capable of a little more than what those smartcards offer, for under $10k? That would be awesome from a trust perspective, at least if you could realistically compare the chip you get back with what you know to expect, using an electron microscope.

~~~
jsolson
Not for an ASIC without spending a LOT on tooling, and really $10k is awfully
optimistic even if you had all of that tooling (I probably should've just said
tens of thousands).

For <$100k, yes, you can absolutely do a small run in that range.

Honestly, you might be better off just buying functional ICs (multi-gate
chips, flip flops, shift registers, muxes, etc.) and making a PCB, though.
Most crypto stuff is small enough that you can do a slow/iterative solution in
fairly small gate counts plus a little SRAM.

~~~
jwatte
If you do that, why wouldn't you use an FPGA or just a fast CPU? Microcontrollers and CPUs are blending in performance, and are cheap enough to plop on a board and call it done for many applications.

~~~
jsolson
Sure, but if you really want to avoid trusting trust (and you're of the mind to build your own hardware), FPGAs and µCs offer a lot of room for snooping.

Given the GP's suggested use, it seemed trusting trust was not on the table.

Certainly even a tiny FPGA can fit pretty naïve versions of common crypto primitives, as can any modern microcontroller. Assuming you only need to do a handful of ops for whatever you're looking to assert/verify, that is by far simpler than building a gate-level representation :)

~~~
namibj
I was thinking about a chip with only SRAM for secret storage that could be bundled into an ID-1 sized card with some small energy storage for the SRAM (there are affordable 0.5mm LiPo cells that fit inside such a card), and then using the card to fit some small display capable of getting a little data out, as well as a touch matrix, possibly in a style similar to the carbon contacts on cheap rubber-membrane keyboards, but gold plated like the smartcard interface. But it seems like you can't afford to store even one decompressed Ed25519 key, let alone RSA, so the idea is moot by virtue of requiring sub-100nm technology to fit at least some SRAM.

------
rasz
You can start at the beginning of the VLSI revolution and read, from the horse's mouth, 'Introduction to VLSI Systems'. If you are still serious about it, get into MIT's EECS.

Btw, you ask about four almost totally separate areas (RTL, UVM, TB, etc.); only managers/execs/veterans/architects know the whole process from raw silicon to packaging.

------
owenfi
Shameless friend-promotion: [http://tinyfpga.com](http://tinyfpga.com)

I spoke to the creator today and he's planning a tutorial/example IP series -
probably open to suggestions if there's anything you're particularly
interested in.

------
periya
Check out EDA Playground. You can run Verilog in their web interface and bring up waveforms from simulations.

[https://www.youtube.com/user/edaplayground](https://www.youtube.com/user/edaplayground)

------
BrooklynRage
You just described a few different sub-fields of computer engineering:

1. Processes, which involves lots of materials science, chemistry, and low-level physics. This covers the manufacturing process, as well as the low-level work of designing individual transistors. It is a huge field.

2. Electrical circuits. These engineers take the specifications given by the foundry (transistor sizes, electrical conductance, etc.) and use them to create circuit schematics and physically lay out the chip in CAD. Once you finish, you send the CAD file to the group from #1 to be manufactured. Modern digital designs have so many transistors that they have to be laid out algorithmically, so engineers spend lots of time creating layout algorithms (a field called VLSI).

3. Digital design. This encompasses writing SystemVerilog/VHDL to specify registers, ALUs, memory, pipelining, etc. and simulating it to make sure it is correct. These engineers turn the dumb circuit elements into smart machines.

It's worth noting that each of these groups primarily deals with the others through abstractions (group 1 sends a list of specifications to group 2; group 3 is given a maximum chip area / clock frequency by group 2), so it is possible to learn them fairly independently. Even professionals tend to have pretty shallow knowledge of the other steps of the process, since the field is so huge.

I'm not super experienced with process design, so I'll defer to others in this
thread for learning tips.

To get started in #2, the definitive book is The Art of Electronics by Horowitz & Hill. Can't recommend it enough, and most EEs have a copy on their desk. It's also a great beginner's book. You can learn a lot by experimenting with discrete components, and a decent home lab setup will cost you $100. Sparkfun/Adafruit are also great resources. For VLSI, I'd recommend this Coursera course: [https://www.coursera.org/learn/vlsi-cad-logic](https://www.coursera.org/learn/vlsi-cad-logic)

To learn #3, the best way is to get an FPGA and start implementing increasingly complicated designs, e.g. basic logic gates --> counters --> hardware implementations of arcade games (a sketch of the first steps follows below). This one from Adafruit is good to start with: [https://www.adafruit.com/product/451](https://www.adafruit.com/product/451), though if you want to make games you'll need to pick up one with a VGA port.
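
To give a flavor of that progression, the first two steps fit in a few lines of Verilog (a sketch; the pin names and 12 MHz clock are assumptions about your particular board):

    // Step 1, basic gates: an AND gate from two buttons to an LED.
    module gates(input btn_a, input btn_b, output led);
      assign led = btn_a & btn_b;
    endmodule

    // Step 2, a counter: the top bits of a free-running counter blink
    // LEDs at eye-visible rates (at 12 MHz, bit 23 toggles about every
    // 1.4 seconds).
    module blink(input clk, output [3:0] leds);
      reg [25:0] count = 0;
      always @(posedge clk) count <= count + 1;
      assign leds = count[25:22];
    endmodule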

Silicon design & manufacturing is super complicated, and I still think that
it's pretty amazing that we're able to pull it off. Good luck with learning!

(Source: TA'd a verilog class in college, now work as an electrical engineer)

~~~
amelius
There's also the field of "Computer Architecture", which is a level above the
ones you've described.

------
nimish
Get yourself an FPGA devkit and start making little hardware bits

~~~
kregasaurusrex
What's the best way for a novice to determine what their needs are? One thing that's been a barrier to entry for me has been fear of vendor lock-in from the point of sale with regard to upgrading in the future; i.e., purchasing a starter kit from company X, then determining that only company Y supports the needed feature set, at which point a non-trivial amount of dev time would be sunk into re-tooling your code across both hardware and design environments. I was originally really excited to see AWS hosting FPGA instances, but a friend told me that they were charging a heavy premium and only had a limited number of manufacturers.

~~~
jsolson
There are basically two companies in this business, Altera (Intel) and Xilinx.

I would not worry about vendor lock-in for now -- there are some quite affordable dev boards (like the Arty ([http://store.digilentinc.com/arty-a7-artix-7-fpga-developmen...](http://store.digilentinc.com/arty-a7-artix-7-fpga-development-board-for-makers-and-hobbyists/)) I've mentioned elsewhere), and no matter what you pick there's a ton of tooling. The concepts from the tools will translate between vendors, though, even if the commands and exact flows change.

~~~
carussell
To add to this:

Xcell is the name of Xilinx's self-published journal(s), and it's free. Here's
the Xcell portal:

[http://www.xilinx.com/about/xcell-publications.html](http://www.xilinx.com/about/xcell-publications.html)

Here's a short article published in Xcell by Niklaus Wirth after he designed
and prototyped a RISC CPU on a Xilinx FPGA:

[https://issuu.com/xcelljournal/docs/xcell_journal_issue_91/3...](https://issuu.com/xcelljournal/docs/xcell_journal_issue_91/30)

And finally, it's worth noting that the most recent post on the Xcell blog is
one titled "Found! A great introduction to FPGAs", which was written just a
couple days ago and is a glowing recommendation for the textbook "Digital
System Design with FPGA: Implementation Using Verilog and VHDL".

[https://forums.xilinx.com/t5/Xcell-Daily-Blog/Found-A-great-...](https://forums.xilinx.com/t5/Xcell-Daily-Blog/Found-A-great-introduction-to-FPGAs-Digital-System-Design-with/ba-p/797803)

------
TheGrassyKnoll

This might help you: MOSIS Integrated Circuit Fabrication Service

[https://www.mosis.com](https://www.mosis.com)

------
deepnotderp
Okay, okay, hang on now, how a chip is manufactured and the design flow are
really two different things.

If you could tell us your background it might be helpful to get started.

------
mozumder
This is a huge topic. You could spend your entire career in any one part of your question - fabrication in a foundry, FPGA synthesis, HDL design, ASIC place and route, etc.

I've actually done all-of-the-above, from making SOI wafers to analog circuit
design for CMOS image sensors to satellite network simulation in FPGAs to
supercomputer architecture and design to XBox GPU place-and-route.

It will honestly take you at least a few years to be able to understand all of
this, and I can't even begin to tell you where to begin.

My track started with semiconductor fabrication processing in college - lots of chemistry, lithography, physics, design-of-experiments, etc. I guess that's as good a start as any. But before that I did get into computer architecture in high school, so that gave me some reference goals.

What are you ultimately trying to do? Get a job at a fab? That's a lot of chemistry and science. Do you want to design state-of-the-art chips, as your IP-core question hints? That's largely an EE degree. Do you want to build a cheap Kickstarter product, as your FPGA question suggests? That's EE and Computer Engineering as well.

