Ask HN: Where do I get started on ASICs, FPGA, RTL, Verilog et al.?
267 points by bharatkhatri14 on Oct 1, 2017 | 109 comments
I actually want to understand the chip manufacturing process - design, prototyping (using FPGAs, etc), baking process in foundries. And also at least a basic understanding of how IP is managed in chip industry - like "IP core" is a term that I frequently hear but due to the myriad interpretations available online, I don't really understand what an "IP core" really means. Hoping to get useful advice from veterans in the chip industry.



I wouldn't get too hung up on the phrase 'IP core'; it's basically the equivalent of a software library: a reusable chunk of silicon or Verilog.

If you want to know about how chips are made then I'd highly recommend the book "CMOS Circuit Design and Simulation" by Baker. It starts off telling you how silicon is etched to make chips, then goes through how MOSFETs work and how to simulate them using SPICE. By the time you're halfway through the book you'll know how a static CMOS logic gate works (down to the electrons).

If you'd rather learn something that you'll be able to apply yourself (without building a chip fab) then the place to start is Verilog (or VHDL). asic-world.com has some good tutorials. You can simulate what you've written using Icarus Verilog and look at the results using GTKWave. If it works in simulation and you want to put it onto a real FPGA it's then just a matter of fighting the Xilinx/Altera/Lattice tools until they give you a bitstream. If you have enough money (a lot) you could even get a physical ASIC manufactured.
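
To make that concrete, here's a minimal sketch of the simulate-and-inspect loop (file and module names are made up): a 2-to-1 mux plus a testbench that dumps a waveform file you can open in GTKWave.

    // mux2.v -- trivial 2-to-1 multiplexer
    module mux2 (input wire a, b, sel, output wire y);
      assign y = sel ? b : a;
    endmodule

    // mux2_tb.v -- testbench: drive some inputs and dump a VCD for GTKWave
    module mux2_tb;
      reg a, b, sel;
      wire y;

      mux2 dut (.a(a), .b(b), .sel(sel), .y(y));

      initial begin
        $dumpfile("mux2.vcd");
        $dumpvars(0, mux2_tb);
        a = 0; b = 1; sel = 0; #10;
        sel = 1;               #10;
        a = 1; b = 0;          #10;
        $finish;
      end
    endmodule

Assuming those file names, something like "iverilog -o mux2_sim mux2.v mux2_tb.v", then "vvp mux2_sim", then opening mux2.vcd in GTKWave lets you watch the signals over time.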


Well said on IP cores. IP cores are like software libraries implemented in hardware in order to accelerate performance. For example, you frequently hear about cores like JPEG, H.264, HEVC and other signal processing or encryption cores. Basically, with hardware IP cores the performance is realtime.


> then just a matter of fighting the Xilinx/Altera/Lattice tools until they give you a bitstream

Haha! So true :-) I have to give some credit to Altera tho because after a sort of steep learning curve you can start generating "IP Cores" with QSys and Tcl scripts to stop clicking on the same icons all day long!


But, if you want to make a high performance netlist, I'd argue you want to know all about logic gates, Boolean math, and all the rest. HDL stands for "hardware description language", and you'll only write good HDL if you know what you're writing looks like in gates.

(Though if you're just making an LED blink when you push a button, you don't need to write good HDL)
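
(Even so, a sketch of the blinky case shows what "thinking in gates" means: assuming a 50 MHz clock, the made-up module below is really just 26 D flip-flops plus an incrementer, and getting in the habit of seeing it that way is the point.)

    // Hypothetical blinky: a free-running counter whose top bit drives the LED.
    // In gates this is just 26 D flip-flops and an adder -- nothing magical.
    module blink (input wire clk, output wire led);
      reg [25:0] count = 0;
      always @(posedge clk)
        count <= count + 1'b1;
      assign led = count[25];   // top bit toggles at ~0.75 Hz with a 50 MHz clock
    endmodule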


Seconded. The way FPGAs work internally is not really how gate-based (dedicated) chips work internally. FPGAs simulate gates with static RAM tables.


> If you want to know about how chips are made then I'd highly recommend the book "CMOS Circuit Design and Simulation" by Baker.

This is a ~$120 book. Do you (or anyone else) have a recommendation for something a little easier to tell hobbyists they should get?



It depends on what sort of ASIC stuff you want to do, meaning digital or analog.

Advanced Chip Design by Kishore Mishra is very good and was like 50 bucks last I remember.

If I were you, I would just look at Amazon reviews and buy whatever looks decently rated and is available for cheap used.


Look for Indian editions (their students obviously can't afford American-priced books); these usually have a marker on the cover saying they're not to be distributed outside the Indian subcontinent, but that stops no one.


Nand 2 Tetris[1] is a good starting place. You should get a good view of how the different levels interact. Coursera has two courses [2][3] that cover the same material as the book.

[1] http://nand2tetris.org/ [2] https://www.coursera.org/learn/build-a-computer [3] https://www.coursera.org/learn/nand2tetris2


I second this course. It's awesome. The next step from there is probably Coursera's VLSI course, starting with https://www.coursera.org/learn/vlsi-cad-logic. It's all about how real-world VLSI CADs work.


Thanks! I finished Nand2Tetris and was wondering where to look for a good next step.


Here's my collection of discussions with recommendations that I've saved just in case I decide to take a single step in this direction someday:

Open Source Needs FPGAs; FPGAs Need an On-Ramp | https://news.ycombinator.com/item?id=14008444 (Apr 2017)

GRVI Phalanx joins The Kilocore Club | https://news.ycombinator.com/item?id=13448166 (Jan 2017)

What It Takes to Build True FPGA as a Service | https://news.ycombinator.com/item?id=13153893 (Dec 2016)

Low-Power $5 FPGA Module | https://news.ycombinator.com/item?id=9863475 (Jul 2015)


It's quite late here, so I'll be brief. You mentioned the words "how IP is managed in chip industry" - so I'm going to move past the bookish knowledge and the tutorials and the open source code.

The chip design and EDA industry are very closeted and niche - there is so much knowledge there that is not part of any manual.

For example for a newcomer, you wouldn't even know what testing and validation in chip design would be - or how formal verification is an essential part of testing.

You wouldn't know what synthesis is, what place and route is, or what GDS masks for foundries are.

There is seriously no place to learn this. The web design or AI world works very differently - you can be a very productive engineer through Udacity. Not with ASICs.

You need to find a job in the chip design or EDA industry. There is seriously no other way.

If I had to make a wild parallel - the only other industry that works like this is the people who make compilers for a living. Same technology, same problems, similar testing steps I guess.


I think your compilers example doesn't fit. They are actually a rather straightforward thing to build: many undergraduates build one as part of their studies, and some hobbyists build compilers that get used by thousands of people and in Fortune 500 companies' core infrastructure. There's little rigor involved.

Among the few software systems that need rigor are control systems for physical installations and trading/finance systems for example.


Also many production-grade compilers (GCC/G++, Clang, OpenJDK, V8, and almost every new language that's come out since the 90s) are open-source. You can go read the commit logs & source code to see how they work, if you're diligent and willing to slog through them. There are certainly tricks that professional compiler writers use that aren't covered in textbooks (the big ones center around error-reporting, incremental compilation, fancy GC algorithms, and certain optimizations), but you can always go consult the source to learn about them.

I thought the thread was really about domains where the bulk of knowledge is locked up in industry rather than being about rigor, but I'd put control systems in that category as well. Also information retrieval (Google's search algorithms are about 2 decades ahead of the academic state-of-the-art...the folks at Bing/A9/Facebook know them too, but you aren't going to find them on the web), robotics, and aerospace.


I'm generally talking about Intel C++ compilers, etc. When I was doing EDA, we used to fork out a lot of cash for these compilers, and these guys used to work very closely with us to optimize for certain kinds of code styles - for example loop unrolling on HP-UX, etc. I don't know if this is still a thing, so I might be mistaken.


What compilers did you have in mind? I suppose you didn't mean compilers for general purpose languages, since most of these are FOSS, like C#, Haskell, C++, TypeScript, etc.


While I agree that the semiconductor industry is crushingly conservative, it's not necessarily that bad. You can still use many of the "old classics", you just have to remember that instead of 10 design rules there are 10K ;)

In all honesty though, a significant amount of the complexity can be hidden in the EDA layers nowadays, and instead of dealing with the, er, "tenacious" synthesis tools, you can use good alternatives like Chisel.


Is that actually viable? The Synopsys and Cadence tools have had several million man-hours put into them.

Does something like Chisel give you competitive apples-to-apples performance? All the way to GDS?


As a veteran from the chip industry, I should warn you that all these suggestions about FPGAs for prototyping are not really done that much in the ASIC industry.

The skills to do front end work are similar but an ASIC design flow generally doesn't use an FPGA to prototype. They are considered slow to work with and not cost effective.

IP cores in ASICs come in a range of formats. "Soft IP" means the IP is not physically synthesised for you. "Hard IP" means it has been. The implications are massive for all the back end work. Once the IP is Hard, I am restricted in how the IP is tested, clocked, reset and powered.

For front end work, IP cores can be represented by cycle accurate models. These are just for simulation. During synthesis you use a gate level model.


I've had a different experience to this. I've worked on ASICs for over ten years and have had experience with nearly all aspects of the design flow (from RTL all the way to GDS2 at one point or another). I've taped out probably 20+ chips (although I've been concentrating on FPGAs for the last three years). Every chip that I've taped out has had extensive FPGA prototyping done on the design. This is in a variety of different areas too (Bluetooth, GPUs, CPUs, video pipelines, etc). You can just get a hell of a lot more cycles through an FPGA prototyping system than you can through an RTL sim, and when you are spending a lot of money on the ASIC masks, etc. you want to have a chance to soak test it first.


My experience agrees with yours. Many big-budget teams use a hardware emulator like the Palladium XP or the similar Synopsys device. Both are built from FPGAs.

Hardware emulators are expensive, but a single mask respin at 7, 10, or 16nm is even more expensive.


There is a distinction between hardware emulators and FPGAs. Though hardware emulators such as Palladiums may use FPGAs inside them, they don't work the same way in terms of validation. The two tools are very different to use.

See myth 7 here: http://www.electronicdesign.com/eda/11-myths-about-hardware-...


As a veteran from the chip industry, I can tell you my experience is completely the opposite.

Nobody in their right mind would produce an ASIC without going through simulation as a form of validation. For anything non-trivial, that means FPGA.


I don't agree. If it's non-trivial, I don't have the more advanced verification tools such as UVM if I prototype via FPGA.

The ability to perform constrained randomised verification is only workable via UVM or something like it. For large designs that is arguably the best verification methodology. Without visibility through the design to observe and record the possible corner cases of transactions, you can't be assured of functional coverage.
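
As a rough illustration of what "constrained randomised" means here (a bare SystemVerilog sketch rather than real UVM, with made-up fields and constraints):

    // Hypothetical bus transaction with constrained-random fields.
    class bus_txn;
      rand bit [31:0] addr;
      rand bit [7:0]  len;
      rand bit        is_write;

      // Keep the randomisation inside the legal/interesting space.
      constraint c_addr { addr inside {[32'h0000_0000:32'h0000_FFFF]}; }
      constraint c_len  { len > 0; len <= 64; }
    endclass

    module tb;
      initial begin
        bus_txn t = new();
        repeat (1000) begin
          if (!t.randomize()) $error("randomization failed");
          // drive t onto the DUT interface, sample it for functional coverage
        end
      end
    endmodule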

While FPGAs can run a lot more transactions, the ability to observe coverage of them is limited.

I have worked on multiple SoCs for Qualcomm, Canon and Freescale. FPGAs don't play a role in any SoC verification that I've worked on.


This was my experience working on SoCs at Broadcom also where we didn't really use FPGAs at all.

But at another employer that did not work on consumer designs, I did use a lot of large FPGAs in final shipped products, and in those cases we did some of our heavy testing and iterating on the real FPGA(s). For example I built a version of the FPGA with pseudo-random data generation to test an interface with another FPGA. When I found a case that failed I could then reproduce it in simulation much more quickly.

That employer also built some ASIC designs and I remember some discussions about using FPGA prototyping for the ASICs to speed up verification or get a first prototype board built faster that would later get redesigned with the final ASIC. I don't know if they ever went down that route but it would not surprise me if they did. These were $20k PCB boards once fully assembled, and integration of the overall system was often a bigger stumbling block than any single digital design.

There are a lot of different hardware design niches so I'm sure there are many other cases.

All my information is also about 10 years out of date.


This reflects my experience. Many/most of the "nontrivial" issues nowadays are rooted in physical issues, not logical issues. And in those cases, simulation is often superior to dealing with the FPGA software layer. FWIW, I asked my co-founder, formerly at Intel, and he said that FPGA involvement was "almost zero".


That's a false dichotomy -- you can do FPGA verification in addition to simulation-based verification. And yes, there are ASIC teams that have successfully done that.


At the SoC level, I don't think so.

The reasons are numerous. I already gave a few. I will give another. Once you have to integrate hard IP from other parties, you cannot synthesise it to FPGA. Which means you won't be able to run any FPGA verification with that IP in the design. You can get a behavioural model that works in simulation only. In fact it is usually a requirement for Hard IP to be delivered with a cycle accurate model for simulation.

I'll give another reason. If you are verifying on FPGA you will be running a lot faster than simulation. The Design Under Test requires test stimulus at the speed of the FPGA. That means you have to generate that stimulus at speed and then check all the outputs of the design against expected behaviour at speed. This means you have to create additional HW to form the testbench around the design. This is a lot of additional work to gain speed of verification. This work is not reusable once the design is synthesised for ASIC.

I can go on and on about this stuff. Maybe there are reasons for a particular product but I am talking about general ASIC SoC work. I got nothing against FPGAs. I am working on FPGAs right now. But real ASIC work uses simulation first and foremost. It is a dominant part of the design flow and FPGA validation just isn't. On a "Ask HN", you would be leading a newbie the wrong way to point to FPGAs. It is not done a lot.


As another veteran in the ASIC industry: we are using FPGAs to verify billion transistor SOCs before taping out, using PCBs that have 20 or more of the largest Xilinx or Altera FPGAs.

It's almost pointless to make the FPGA run the same tests as in simulation. What you really want is to run things that you could never run in simulation. For example: boot up the SOC until you see an Android login screen on your LCD panel.

A chip will simply not tape out before these kind of milestones have been met, and, yes, bugs have been found and fixed by doing this.

The hard macro IP 'problem' can be solved by using an FPGA equivalent. Who cares that, say, a memory controller isn't 100% cycle accurate? It's not as if that makes it any less useful in feeding the units that simply need data.


I find the above pair of comments really interesting. I'm guessing there are parallels with differences of opinion and approach in other areas of engineering. There are always reasons for the differences, and those are usually rooted in more than just opinion or dogma.

In this case, I'd guess it's got a lot to do with cost vs. relevance of the simulation. If you're Intel or AMD making a processor, I bet FPGA versions of things are not terribly relevant because it doesn't capture a whole host of physical effects at the bleeding edge. OTOH for simpler designs on older processes, one might get a lot of less formal verification by demonstrating functionality on an FPGA. But this is speculation on my part.


"If you're Intel or AMD making a processor, I bet FPGA versions of things are not terribly relevant because it doesn't capture a whole host of physical effects at the bleeding edge."

Exactly. When you verify a design via an FPGA you are only essentially testing the RTL level for correctness. Once you synthesise for FPGA rather than the ASIC process, you diverge. In ASIC synthesis I have a lot more ability to meet timing constraints.

So given that FPGA validation only proves the RTL is working, ASIC projects don't focus on FPGA. We know we have to get back annotated gate level simulation test suite passing. This is a major milestone for any SoC project. So planning backwards from that point, we focus on building simulation testbenches that can work on both gate level and RTL.

I am not saying FPGAs are useless but they are not a major part of SoC work for a reason. Gate level simulation is a crucial part of the SoC design flow. All back end work is.


Let me try to summarize part of this: When you're building an ASIC, you have to care about the design at the transistor level because you're going for maximum density, maximum speed, high volume, and economies of scale. When you're building an FPGA, you are only allowed to care about the gates, which is one abstraction level higher than transistors.

In an FPGA, you cannot control individual transistors. (FPGAs build "gates" from transistors in a fundamentally different way than ASICs do, because the gates have to be reprogrammable.) And that's okay because FPGA designs aren't about the highest possible speed, highest density, highest volumes or lowest volume cost.
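
To make the "SRAM tables" point concrete: a 4-input FPGA logic cell is essentially a 16x1 memory. The sketch below is made up (it's not any vendor's actual primitive), showing a LUT "programmed" as a 4-input XOR.

    // The constant is the truth table held in SRAM cells loaded from the
    // bitstream; the inputs simply select one of its 16 bits.
    module lut4_xor (input wire a, b, c, d, output wire y);
      localparam [15:0] TABLE = 16'h6996;   // truth table of a ^ b ^ c ^ d
      assign y = TABLE[{a, b, c, d}];
    endmodule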


There are three things here that you've intertwined.

Process -- This is the science of creating circuits on silicon wafers using lithography, etching, and doping. There is a large body of knowledge around the physics involved here. Materials science and Physics and Silicon Fabrication are all good places to start.

Chip Design -- This is creating circuits which are run through a tool that can lay them out for you automatically. HDLs teach you to describe the semantics of the hardware in such a way that a tool can infer actual circuits. Generally a solid understanding of digital logic design is a pre-requisite, and then you can learn the more intimate details of timing closure, or floor planning, signal propagation and tradeoffs of density and speed.

IP -- Clearly all of the intellectual property law is a huge body but most of the IP around chips is patent law (how the chips are made) and copyright law (how they are laid out).


Yes that's true, but there's actually even more, like packaging, testing, etc. I took a free online course from Stanford called "nanomanufacturing" but it really was mostly about chip manufacturing, packaging, etc. Even though I worked in the semiconductor industry for 12 years (mostly bench testing preproduction ASICs) I still found it really useful. Not sure if you can still view the archives here if you sign up for an account (I can, but I took the class).

https://lagunita.stanford.edu/courses/Engineering/Nano/Summe...

No substitute for learning the physics, but at least it kind of gives you some idea of what's involved. In addition to all the crazy technology involved in fabricating the chips, the packaging technology has gotten really sophisticated. It can be very confusing what the difference is between BGA, WLCSP, stacked dies, etc. Anyway, the course covered a lot of different types of processing with examples.


That is awesome. The link doesn't work for me but I didn't really expect it to. My first job in the Bay Area was working for Intel and about 6 months in I was offered some 'counterfeit' or grey market Intel DRAM chips. (as a microcomputer enthusiast, not as an Intel employee) I took the offer to security, who gave me the cash to buy a tube of them, which I did, and they disassembled them to figure out where in the packaging pipeline they had gone missing.

Sadly I never got to hear the full story on how they came to be but I did get a good look at the packaging pipeline that Intel used at the time. It was extensive even then with half a dozen entities providing steps in the path.


An ASIC is pretty expensive unless you've got Google money [1]. Start with an FPGA dev board [2] and probably just stick with FPGA. Hell, Amazon has an FPGA instance [3]:

[1] https://electronics.stackexchange.com/questions/7042/how-muc...

[2] https://www.sparkfun.com/products/11953

[3] https://aws.amazon.com/ec2/instance-types/f1/


You can actually get an ASIC manufactured for a few thousand dollars via CMP or Europractice. So not quite Google money. The difficulty is in paying for the software licenses you need to go from Verilog to DRC checked GDSII files (which is what you need to send to them).

In fact, personally I think this is a much better route for open source hardware. Reverse engineering FPGA bitstreams is impressive, but you're swimming against the tide. If we had good open source tooling for synthesis/place-and-route/DRC checking and good open source standard cell libraries (and these things exist, e.g. qflow, they're just not amazing currently) then truly open source hardware could actually be a thing. Even on 10-year-old fabs you'll get much better results than you could on an FPGA (you just have to get it right the first time).


There's also efabless: https://efabless.com/

Europractice has prices online: http://www.europractice-ic.com/general_runschedule.php


MOSIS is also an option in the US, albeit more geared towards industry.


I'm very interested in this. I read that one small design (which I knew was very small, but not quantitatively so) cost approximately $5k per small run.

How is the cost calculated? I presume size of final wafer (ie, number of chips produced) at least; does transistor count per chip influence anything too?

Finally, is it possible to produce and maintain a fully open-source design that's the chip-fab equivalent of the book publishing industry's "camera-ready copy"? I get the idea that this is specifically where things aren't 100% yet, but, using entirely open tools, can you make something that is usable?


The cost depends on the technology. More modern processes are dramatically more expensive (don't even think about something like 28nm). If you're looking at something ancient like 130nm, 0.35um, etc then it's going to be something like $1-2k per mm^2. Normally there's a minimum cost of a few mm^2. Expect to get a few tens of chips back. Transistor count has no effect, they couldn't care less what you put inside your part of the wafer, so long as it follows the design rules (for example minimum % coverage on each layer to prevent it collapsing vertically), but obviously if you need more transistors you might need more area.
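
To put rough numbers on that (hypothetical figures picked from the ranges above):

    3 mm^2 minimum  x  ~$1,500 per mm^2  =  ~$4,500 for the silicon

which is in the same ballpark as the ~$5k-per-small-run figure mentioned upthread.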

Yes it's 100% possible to do an open-source design, qflow has been used to make sub-circuits of ASICs, but it's going to be extremely difficult. There are lots of things missing which you'd have to take from the fab's PDK or design yourself, for example open source I/O pads (sounds boring, but actually lots of work with ESD, etc). Combined with huge missing feature-sets in the open source tools, like extraction of designs back to SPICE circuits with parasitics and complete DRC checking, you're not going to have a fun time.


Hmm, I see. Thanks for this info, and particularly the bit about the pricing.


The equivalent term is "GDSII", and I think it is possible, but I'm not aware of an example. Definitely not for beginners.

"Full open source" encounters some odd corner cases in EDA because it's not software. Eg Do you use the foundry's library for things like IO pads?


Transistor count impacts the size of the die which impacts yield (how many good die out of total.)


Regarding the ASIC part: there are several good courses online that explain in some detail the whole process from a description in a hardware description language to the GDSII file you could send to a foundry. See for example this course https://web.csl.cornell.edu/courses/ece5745/; there are also very good courses available from Berkeley and MIT. What is usually missing are the gorier details of the backend flow, which can become very involved and complicated depending on your design and process.

Unfortunately, someone who has no access to the EDA tools from Cadence/Synopsys or to standard cell library files from foundries cannot really follow along all that far; you are limited to working at the RTL level.

There are several good open source RTL simulators available. I have personally used mostly Verilator, which supports (almost) all synthesizable constructs of SystemVerilog and has performance close to the best commercial simulators like VCS. It compiles your design to C++ code which you can then wrap in any way you like.

You should also check out https://chisel.eecs.berkeley.edu/, which is a hardware description language embedded in Scala. The nice thing about it is that it has a relatively large number of high quality open source examples and designs (https://github.com/freechipsproject/rocket-chip) and a library of standard components available, something which can't really be said of Verilog/VHDL unfortunately. As an added bonus you can actually use IntelliJ as an IDE, which blows any of the commercial IDEs available for SystemVerilog or VHDL out of the water.

Another thing I can recommend is to get yourself a cheap FPGA board, some of them are programmable purely with open source tools, see http://www.clifford.at/icestorm/. Alternatively the Arty Devkit comes with a license for the Xilinx Vivado toolchain.


I wouldn't start with Chisel as there's a lack of good documentation online. When I used it for a Berkeley class, sometimes you would feel like you hit a wall. Verilog or SystemVerilog will have much more in the way of stack overflow type of documentation.


> And also at least a basic understanding of how IP is managed in chip industry - like "IP core" is a term that I frequently hear but due to the myriad interpretations available online, I don't really understand what an "IP core" really means.

I'm by no means a veteran but my understanding is that "IP core" refers to a design you buy from someone else. Say you want a video encoder on your smart fridge SoC. You can either spend a whole lot of time, manpower and money developing one yourself or you can license the design from someone else who already has one and just dump it in.

You'd only do this when you want to integrate the design into your own (likely mass manufactured) chip. You can also often buy a packaged chip that serves the same function for much less but doing that is a tradeoff. You can do it at very low volume and cost but you potentially lose a bunch of efficiency in terms of space and power.


That is my understanding as well. In a commercial setting, reinventing the wheel is economically a bad idea. For the company licensing the IP core, the licence revenues are another form of return on investment for the design effort. Companies, like ARM, are "fab-less", i.e. they create IP cores and license them to semi manufacturers.


I've gotten curious about FPGAs myself of late, particularly with video capture. A hopefully-on-topic question of my own, if I may:

I've seen that some FPGA boards have HDMI transceivers that will decode TMDS and get the frame data into the FPGA somehow. That got me thinking about various possibilities.

- I want to build a video capture device that will accept a number of TMDS (HDMI, DVI, DisplayPort) and VGA signals (say, 8 or 10 or so inputs, 4-5 of each), simultaneously decode all of them to their own framebuffers, and then let me pick the framebuffer to show on a single HDMI output. This would let me build a video switcher that could flip between channels with a) no delay, b) no annoying resyncs, and c) because everything's on independent framebuffers, the ability to compensate for resolution differences (e.g., a 1280x1024 input on the 1920x1080 output) via e.g. centering.

- In addition to the above, I also want to build something that can actually _capture_ from the inputs. It's kind of obvious that the only way to be able to do this is via recording to a circular window in some onboard DDR3 or DDR4 (256GB would hold 76 seconds of 4K @ 144fps). My problem is actually _dumping_/saving the data in a fast way so I can capture more input.

I can see two ways to build this

1) a dedicated FPGA board with onboard DDR3, a bunch of SATA controllers and something that implements the equivalent of RAID striping so I can parallelize my disk writes across 5 or 10 SSDs and dump fast enough.

2) A series of FPGA cards, each which handles say 2 or 3 inputs, and which uses PCI bus mastering to write directly into a host system's RAM. That solves the storage problem, and would probably simplify each card. I'd need a fairly beefy base system, though; 4K 144fps is 26GB/s, which is uncomfortably close to PCI-e 3.0 x16's limit of 32GB/s.

I'll admit that this mostly falls under "would be really really awesome to have"; I don't have a commercial purpose for this yet, just to clarify that. That said, my inspiration is capturing pixel-perfect, perfectly-timed output from video cards to identify display and rendering glitches (particularly chronologically-bound stuff, like dropped frames) in software design, so there's probably a market for something like this in UX research somewhere...


This is a very difficult project.

No FPGAs have anything to decode TMDS. What they have is transceivers that will take the multi-gigabit serial signal and give it to you 32 or 64 bits at a time. Then it's your job to handle things like bit alignment, 8b10b decoding, etc. You need 6MB of RAM to hold one frame at 1080p, and to do this you'll need to hold one for each input. That requires external RAM (there's not enough on the FPGA), which means you'd need something like 30Gbit/sec of RAM bandwidth, which means you need DDR2 RAM before you've even stored anything (DDR3 is really awkward, I'd stick to DDR2 or LPDDR2, DDR4 is going to be impossible).
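
(Rough arithmetic behind those bandwidth numbers, assuming 24-bit RGB and 60 Hz inputs:)

    1920 x 1080 pixels x 3 bytes     = ~6.2 MB per 1080p frame
    8 inputs x 6.2 MB x 60 fps       = ~3.0 GB/s, i.e. ~24 Gbit/s of writes alone
    + reading frames back out for the single output, plus refresh overhead

so you're at roughly 30 Gbit/s of sustained DRAM bandwidth before you've stored a single frame anywhere else.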

I'm not even going to comment on the feasibility of implementing SATA.


I see... ouch.

Another comment mentioned the HDMI2USB project, and the main FPGA target of that project uses DDR3 FWIW. I get the impression that managing RAM is just really really really _unbelievably_ hard; not impossible, just pɹɐɥ ʎllɐǝɹ ʎllɐǝɹ.


I don't think it's that bad. Board design is, but if you are working with some standard board, say from Opal Kelly (no affiliation), then you don't have to worry about that.

But what about the DRM with HDMI? I have no idea how it works, but I assumed the signal was encrypted.


My context would be recording desktop-level glitches, so HDCP wouldn't be an issue.

This device would be a recorder though, so I'd likely never legitimately get HDCP keys. I can't see myself needing any though.


HDCP (the DRM tech for HDMI) was not very well designed, and the master keys were cracked some years ago.


Which is probably fine if you're just doing it as a hobby project, but could land you in a legal pickle if you try to do anything commercial with it.


Yes, I imagine you'd run afoul of the DMCA or similar if you started trying to make money from such a thing.


Ah, TIL. I thought HDCP was still a wall. Security by Committee™ FTW!


Perhaps related, if not quite as ambitious:

https://hdmi2usb.tv/home/ - Open video capture hardware + firmware

I would also recommend this project if someone doesn't have an idea of their own, and/or doesn't have any fpga experience. So if someone is looking for people with both of those doing work in the open who might be willing to mentor them, finding a way to help this project is a great way to get started.


This is quite interesting, thanks!

It uses an FPGA with DDR3 incidentally, huh.

Thanks very much for this, this is very close to what I'm trying to get at. I agree that figuring out how to help out on this project would be very constructive, I've filed it away.


Check out https://git.elphel.com/Elphel/x393; it is a design for an FPGA for a camera, and it contains roughly the components you are talking about: on-board DDR3, SATA output. The code is released under GPL3, so you won't be able to use it for something commercial, but it might be a good starting point to give you an idea of what such a design would look like.


> The code is released under GPL3, so you won't be able to use it for something commercial […].

Actually, you can do this just fine. There are no 'non-commercial' clauses or anything like that in the GPL. You just have to release your own sources.


The capture thing appears to exist: https://www.blackmagicdesign.com/products/hyperdeckshuttle/ - presumably it applies lossless compression. Lossless encoding within the h264 container should be possible.


That is interesting.

Hmm, I just found a teardown video: https://www.youtube.com/watch?v=TLs2CNhbKAs

I spy a Xilinx Spartan 6 at 2:16!


That Blackmagic stuff is really awesome. Expensive as ..., but worth every penny.


Pardon my ignorance, but I thought Blackmagic's claim to fame was always being on the low end of the pricing for their tech.


For what it's worth, a lot of time working towards Carnegie Mellon's undergraduate (and graduate) degree in computer engineering revolved around Verilog and FPGAs and ASICs. After learning all sorts of principles and design skills we learned Verilog so we could actually build decent-size projects and simulate them. Then we would build real-world projects with an FPGA, and then the advanced classes had us designing and simulating ASICs. Then the really advanced classes had us studying manufacturing ASICs.


I wanted to learn about Verilog development and to get a better understanding of what's happening on chips. To that end I bought a MiST FPGA-based computer: https://github.com/mist-devel/mist-board/wiki

It's Altera Cyclone III-based. The MiST wiki says the free "web edition" of the Altera "Quartus II" development environment is sufficient to develop for the unit (albeit I haven't actually gotten around to doing anything with it yet).

I can't say how the MiST board stacks up to dev boards from FPGA manufacturers. I may be going about this the most wrong way possible, but here was my rationale: I was attracted to MiST because tutorials were available for it (https://github.com/mist-devel/mist-board/tree/master/tutoria...), and because the device could be usable as a retro-computing platform if it ended up being unusable for me for anything else. (Chalk that up to rationalization of the purchase, I guess.)


First of all you must understand logic gates, flip-flops (D-FF, T-FF and so on) and multiplexers. All of them are built on logic gates. To verify that you really understand the concept, try to implement a digital clock. (This one: https://sapling-inc.com/wp-content/gallery/digital-clocks-fo...)

Logic gates are essential to learning digital circuits. After you understand logic gates, you can use them to build many things that are really related to the application. There are many tools to verify that the logic gates work as designed.

Then, based on the project requirements ($$, time, performance, ...), you can choose to use an FPGA or an ASIC to implement the logic gates. FPGAs use arrays of configurable logic cells, while ASICs use CMOS to implement the logic gates directly. FPGAs are easier to learn and much cheaper; you can buy a development board which costs only several hundred dollars. ASICs need much more domain knowledge and many more people involved. An ASIC needs you to understand the electronics in order to build something that is useful: how the CMOS transistors are implemented (i.e., how the semiconductor becomes conductive), how resistance and capacitance affect performance, how the number of wafer layers affects the routing, and more. And don't forget manufacturing can introduce defects which cause the IC to malfunction in unexpected ways. Each step in ASIC design needs its own specialist.
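
As a tiny taste of the digital-clock exercise suggested above, here is a sketch of just the seconds counter, assuming a 1 Hz tick has already been divided down from the board clock:

    // Mod-60 seconds counter: flip-flops, an incrementer and a comparator.
    module seconds_counter (
      input  wire       clk_1hz,
      input  wire       rst,
      output reg  [5:0] seconds,     // 0..59
      output wire       minute_tick  // asserted once per minute
    );
      assign minute_tick = (seconds == 6'd59);

      always @(posedge clk_1hz) begin
        if (rst || minute_tick) seconds <= 6'd0;
        else                    seconds <= seconds + 6'd1;
      end
    endmodule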


For 5 seconds I was like "but the clock is the analog part of the circuit, this does not make any sense". Then I clicked the link.


You may find the book Contemporary Logic Design by Katz and Borriello to be interesting. It's what we used in my college digital logic class.

For my computer architecture class (i.e. the class where we learn how to design basic CPUs -- and ultimately implemented one in Verilog), we used Computer Organization and Design by Patterson and Hennessy. That might also be of interest.


You can start by buying a Papillon board and going through the Free Range VHDL book. That's the high level.

When you want to understand CMOS and integrated circuits, you need some electronics experimenter kit, and a lot of practice with Ohm's law. Then read up on multi-gate transistors and (here my experience stops) lithography and small-scale challenges (tunneling loss is a thing, I suppose?)

Of course, "ip core" can mean different things, might be some Verilog source, might be some netlists, might be a hard macro for a particular process. You really need to work with it to get the specifics. (Subscribing to EETimes, going to trade shows, and otherwise keeping up might help)

But at the end of the day, you're asking "how can I become an experienced ASIC engineer," and the truth is that it takes time, education, and dedication.


The board you're thinking of is the Papilio. The Papillon is a dog breed. :)


I studied an MEng in Electronic Systems Engineering, and really enjoyed the courses in IC design.

However, I couldn't find a chip-design job in a country other than the US or UK that wasn't related to military applications.

Now I work in Taiwan, and I see the chips being made! But my work is related to control systems for the testing equipment, which is software instead of hardware design.

I got a Virtex-II FPGA board from a recycling bin, and I wanted to find a good personal project for it. Even now, I'm at a loss for ideas. I can do everything I need with a Raspberry Pi already.

Please can someone suggest some good projects I could only do with an FPGA?


You could see what the folks at Hackaday.io are doing.

https://hackaday.io/list/3746-programmable-logic-projects


Opencores.org has a very large collection of opensource IP cores.


http://www.clifford.at/icestorm/

An open source FPGA workflow. ASICs are tougher.


I'd start with learning a hardware description language and describing some hardware. Get started with Verilog itself. I'm a fan of the [Embedded Micro tutorials](https://embeddedmicro.com/tutorials/mojo) -- see the links under Verilog Tutorials on the left (they're also building their own HDL, which unless you own a Mojo board isn't likely of interest). Install Icarus Verilog and run through the tutorials making sure you can build things that compile. Once you get to test benches, install Gtkwave and look at how your hardware behaves over time.

You can think of "IP cores" as bundled up (often encrypted/obfuscated) chunks of Verilog or VHDL that you can license/purchase. Modern tools for FPGAs and ASICs allow integrating these (often visually) by tying wires together -- in practice you can typically also just write some Verilog to do this (this will be obvious if you play around with an HDL enough to get to modular design).

Just writing and simulating some Verilog doesn't really give you an appreciation for hardware, though, particularly as Verilog can be not-particularly-neatly divided into things that can be synthesized and things that can't, which means it's possible to write Verilog that (seems to) simulate just fine but gets optimized away into nothing when you try to put it on an FPGA (usually because you got some reset or clocking condition wrong, in my experience). For this I recommend buying an FPGA board and playing with it. There are several cheap options out there -- I'm a fan of the [Arty](http://store.digilentinc.com/arty-a7-artix-7-fpga-developmen...) series from Digilent. These will let you play with non-trivial designs (including small processors), and they've got lots of peripherals, roughly Arduino-style.
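
One well-known flavour of that simulation/synthesis gap (besides the reset/clocking mistakes mentioned above) is the incomplete sensitivity list, sketched here:

    // In simulation 'y' only updates when 'a' changes (stale when only 'b'
    // changes), but synthesis ignores the sensitivity list and builds a
    // plain AND gate -- so the design behaves differently on the FPGA.
    module mismatch (input wire a, b, output reg y);
      always @(a)        // should be always @(a or b), or better, always @*
        y = a & b;
    endmodule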

If you get that far, you'll have discovered that's a lot of tooling, and the tooling has a lot of options, and there's a lot that it does during synthesis and implementation that's not at all obvious. Googling around for each of the phases in the log file helps a lot here, but given what your stated interest is, you might be interested in the [VLSI: Logic to Layout](https://www.coursera.org/learn/vlsi-cad-logic) course series on Coursera. This talks about all of the logic analysis/optimization those tools are doing, and then in the second course discusses how that translates into laying out actual hardware.

Once you've covered that ground it becomes a lot easier to talk about FPGAs versus ASICs and what does/doesn't apply to each of them (FPGAs are more like EEPROM arrays than gate arrays, and for standard-cell approaches, ASICs look suspiciously like typesetting with gates you'd recognize from an undergrad intro-ECE class and then figuring out how to wire all of the right inputs to all of the right outputs).

Worth noting: getting into ASICs as a hobby is prohibitively expensive. The tooling that most foundries require starts in the tens-of-thousands-per-seat range and goes up from there (although if anyone knows a fab that will accept netlists generated by qflow I'd love to find out about it). An actual prototype ASIC run once you've gotten to packaging, etc. will be in the thousands to tens of thousands at large (>120nm) process sizes.


A quick warning about Embedded Micro: do not use the "Mojo IDE". It's extremely barebones, and lacks many important features -- in particular, it has no support for simulation workflows or timing analysis.


If I correctly understand what you are saying, it could be possible to make a small custom chip for doing some crypto, capable of a little more than what those smartcards offer, for under $10k? That would be awesome from a trust perspective, at least if you could realistically compare the chip you get back with what you know to expect, using an electron microscope.


Not for an ASIC without spending a LOT on tooling, and really $10k is awfully optimistic even if you had all of that tooling (I probably should've just said tens of thousands).

For <100k, yes, you can absolutely do a small run in that range.

Honestly, you might be better off just buying functional ICs (multi-gate chips, flip flops, shift registers, muxes, etc.) and making a PCB, though. Most crypto stuff is small enough that you can do a slow/iterative solution in fairly small gate counts plus a little SRAM.


If you do that, why wouldn't you use an FPGA or just a fast CPU? Microcontrollers and CPUs are blending in performance, and are cheap enough to plop on a board and call it done for many applications.


Sure, but if you really want to avoid trusting trust (and you're of the mind to build your own hardware), FPGAs and µcs offer a lot of room for snooping.

Given the GP's suggested use, it seemed trusting trust was not on the table.

Certainly even a tiny FPGA can fit pretty naïve versions of common crypto primitives, as can any modern micro-controller. Assuming you only need to do a handful of ops for whatever you're looking to assert/verify, that is by far simpler than building a gate-level representation :)


I was thinking about a chip with only SRAM for secret storage that could be bundled into an ID-1 sized card, with some small energy storage for the SRAM (there are affordable 0.5mm LiPo cells that fit inside such a card), and then using the card to fit some display capable of giving a little data out, as well as a touch matrix, possibly by just using a style similar to the carbon contacts on cheap rubber membrane keyboards, but gold plated like the smartcard interface. But it seems like you can't afford to store one decompressed Ed25519 key, or dare I say RSA, so the idea is moot by virtue of requiring sub-100nm technology to fit at least some SRAM.


In this use case, a lack of accessibility of the SRAM content through probing is very important. The benefit of this over some µC or FPGA is that you can account for every speck on a scanning tunneling/electron microscope, and on the high-resolution X-ray you made before, while the chip was still in its package, which you can compare with all the chips you will use and ship. It is sadly easy to backdoor with just a single hidden gate.


> I actually want to understand the chip manufacturing process - design, prototyping (using FPGAs, etc), baking process in foundries. And also at least a basic understanding of how IP is managed in chip industry - like "IP core" is a term that I frequently hear but due to the myriad interpretations available online, I don't really understand what an "IP core" really means. Hoping to get useful advice from veterans in the chip industry.

I was about to write a thorough answer but jsolson's very good answer mostly covered what I had to say. I'll add my 2 cents anyway

- To understand the manufacturing process you need to know the physics and manufacturing technology. That's very low-level stuff that takes years to master. There are books and scientific papers on the subject but it's a game very few players can play (i.e. big manufacturers like Intel and TSMC, maybe top universities/research centres). You can try some introductory books like Introduction to Microelectronic Fabrication [1] and Digital Integrated Circuits [2] by Rabaey if you're curious.

- You can design ASICs without being an "expert" on the low level stuff, but the tools are very expensive (unless you're a student and your college has access to those tools). You need to learn VHDL/Verilog, definitely need the right tools (which, I'll say it again, are too expensive for hobbyists) and extra money to spend for manufacturing.

- FPGAs are different. No need to know the physics and you don't have to bother with expensive tools and foundries, the chip is already manufactured. My advice is to

(a) get an FPGA board. Digilent boards are a good choice as jsolson said, but there are other/cheaper solutions as well [3][4]. You'll still need software tools but (for these boards) they are free

(b) learn VHDL and/or Verilog. Plenty of books and online resources. A good book I recommend is RTL Hardware Design Using VHDL [5]

(c) I assume you know some basic principles of digital design (gates, logic maps, etc.). If not you'll need to learn that first. A book is the best option here, I highly recommend Digital Design [6] by Morris Mano

[1] https://www.amazon.com/Introduction-Microelectronic-Fabricat...

[2] https://www.amazon.com/Digital-Integrated-Circuits-2nd-Rabae...

[3] https://www.scarabhardware.com/minispartan6/

[4] http://saanlima.com/store/index.php?route=product/product&pa...

[5] http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0471720925...

[6] https://www.amazon.com/Digital-Design-3rd-Morris-Mano/dp/013...


The website http://semiengineering.com/category-main-page-sld/ also has lots of interesting articles about the semiconductor industry.


You can start at the beginning of the VLSI revolution and read from the horse's mouth: 'Introduction to VLSI Systems'. If you are still serious about it, get into MIT's EECS.

Btw, you ask about 4 almost totally separate areas (RTL, UVM, TB, etc.); only managers/execs/veterans/architects know the whole process from raw silicon to packaging.


Shameless friend-promotion: http://tinyfpga.com

I spoke to the creator today and he's planning a tutorial/example IP series - probably open to suggestions if there's anything you're particularly interested in.


Check out EDA playground. You can run verilog on their web interface and can bring up waveforms from simulations.

https://www.youtube.com/user/edaplayground


You just described a few different sub-fields of computer engineering:

1. Processes, which involves lots of materials science, chemistry, and low-level physics. This involves the manufacturing process, as well as the low-level work of designing individual transistors. This is a huge field.

2. Electrical circuits. These engineers take the specifications given by the foundry (transistor sizes, electrical conductance, etc.) and use them to create circuit schematics and physically lay out the chip in CAD. Once you finish, you send the CAD file to the group from #1 to be manufactured. Modern digital designs have so many transistors that they have to be laid out algorithmically, so engineers spend lots of time creating layout algorithms (the field known as VLSI).

3. Digital design. This encompasses writing SystemVerilog/VHDL to specify registers, ALUs, memory, pipelining etc. and simulating it to make sure it is correct. They turn the dumb circuit elements into smart machines.

It's worth noting that each of the groups primarily deals with the others through abstractions (Group 1 sends a list of specifications to group 2, Group 3 is given a maximum chip area / clock frequency by group 2), so it is possible to learn them fairly independently. Even professionals tend to have pretty shallow knowledge of the other steps of the process since the field is so huge.

I'm not super experienced with process design, so I'll defer to others in this thread for learning tips.

To get started in #2, the definitive book is the Art of Electronics by Horowitz & Hill. Can't recommend it enough, and most EEs have a copy on their desk. It's also a great beginner's book. You can learn a lot by experimenting with discrete components, and a decent home lab setup will cost you $100. Sparkfun/Adafruit are also great resources. For VLSI, I'd recommend this coursera course: https://www.coursera.org/learn/vlsi-cad-logic

To learn #3, the best way is to get a FPGA and start implementing increasingly complicated designs, e.g. basic logic gates --> counters --> hardware implementation of arcade games. This one from Adafruit is good to start: https://www.adafruit.com/product/451?gclid=EAIaIQobChMIhKPax..., though if you want to make games you'll need to pick up one with a VGA port.

Silicon design & manufacturing is super complicated, and I still think that it's pretty amazing that we're able to pull it off. Good luck with learning!

(Source: TA'd a verilog class in college, now work as an electrical engineer)


There's also the field of "Computer Architecture", which is a level above the ones you've described.


Get yourself an FPGA devkit and start making little hardware bits


What's the best way for a novice to determine what their needs are? One thing that's been a barrier to entry for me has been fear of vendor lock-in from the point of sale with regards to upgrading in the future; i.e., purchasing a starter kit from company X, only to determine later that only company Y supports the feature set you need, at which point a non-trivial amount of dev time would be sunk in re-tooling your code across both hardware and design environments. I was originally really excited to see AWS hosting FPGA instances, but a friend told me that they were charging a heavy premium and only had a limited number of manufacturers.


If you don't know what you are doing, trying to pick an optimal path is not likely to be a useful exercise. I'd start with anything that looked approachable & free/cheap, try to learn a few things, craft a few questions, and then reassess. There's no reason to fear the unknown. Jump in and make it more known.


Huh? There are loads of cheap dev kits for a few tens of bucks on eBay.

Example: http://www.ebay.com/itm/ALTERA-FPGA-Cyslonell-EP2C5T144-Mini...

This uses the EP2C5T144. Some basic getting started info here: http://www.leonheller.com/FPGA/FPGA.html [Leon Heller]

You can run a free copy of Quartus and use it to run Verilog / VHDL cpus, or anything else you want to do.

Same applies to the Xilinx FPGAs.

A really cool example project: http://excamera.com/sphinx/fpga-j1.html


There are basically two companies in this business, Altera (Intel) and Xilinx.

I would not worry about vendor lock-in for now -- there are some quite affordable dev boards (like the Arty (http://store.digilentinc.com/arty-a7-artix-7-fpga-developmen...) I've mentioned elsewhere), and no matter what you pick there's a ton of tooling. The concepts from the tools will translate between vendors, though, even if the commands and exact flows change.


To add to this:

Xcell is the name of Xilinx's self-published journal(s), and it's free. Here's the Xcell portal:

http://www.xilinx.com/about/xcell-publications.html

Here's a short article published in Xcell by Niklaus Wirth after he designed and prototyped a RISC CPU on a Xilinx FPGA:

https://issuu.com/xcelljournal/docs/xcell_journal_issue_91/3...

And finally, it's worth noting that the most recent post on the Xcell blog is one titled "Found! A great introduction to FPGAs", which was written just a couple days ago and is a glowing recommendation for the textbook "Digital System Design with FPGA: Implementation Using Verilog and VHDL".

https://forums.xilinx.com/t5/Xcell-Daily-Blog/Found-A-great-...


If you're worried about vendor lock-in, FPGA/ASIC design is not for you. The design tool market is horrible.

That said, for a novice there are no significant differences in features between Altera and Xilinx.

The place where you might see problems is mostly in simulation, where the big three tool vendors support different language features in SystemVerilog and VHDL-2008. Again, this is not likely to be a problem for a hobbyist/novice.

GHDL has good support for VHDL-2008; I don't know how well Icarus supports the corresponding SystemVerilog features.


>The design tool market is horrible

I wouldn't say it's horrible, it's just not open source (mainly because it's mostly a high-end market). You can easily get by with the free versions of the implementation/simulation tools if you're a hobbyist


It's not really a market, though. If you have decided to use Xilinx FPGAs, you're going to use Xilinx tooling. Period.

For someone coming from software programming, that seems very strange. But you get over it.


Lattice is the only vendor with an open-source toolchain: http://www.clifford.at/icestorm/


What I've started doing is looking at the iCE40 line of FPGAs that are supported under the IceStorm project. It's not a perfect way to avoid that kind of lock-in, but it seems like a decent way to start without having to worry too much about that. That said, I'm a total beginner and someone else might have better advice.

http://www.clifford.at/icestorm/


Just toss a coin and pick Xilinx or Intel (Altera). If you are working for a company then you may have better access to a Sales Engineer from one or the other which could also influence your choice.


Edit: Thanks a lot for the info everyone.


This might help you: MOSIS Integrated Circuit Fabrication Service - https://www.mosis.com


Okay, okay, hang on now, how a chip is manufactured and the design flow are really two different things.

If you could tell us your background it might be helpful to get started.


This is a huge topic. You can spend your entire career in just each part of your question - fabrication in a foundry, FPGA synthesis, HDL design, ASIC place and route, etc..

I've actually done all-of-the-above, from making SOI wafers to analog circuit design for CMOS image sensors to satellite network simulation in FPGAs to supercomputer architecture and design to XBox GPU place-and-route.

It will honestly take you at least a few years to be able to understand all of this, and I can't even begin to tell you where to begin.

My track started with semiconductor fabrication processing in college - lots of chemistry, lithography, physics, design-of-experiments, etc. I guess that's as good a start as any. But before that I did get into computer architecture in high school, so that gave me some reference goals.

What are you ultimately trying to do? Get a job at a fab? That's a lot of chemistry and science. Do you want to design state-of-the-art chips, as your IP-core question hints at? That's largely an EE degree. Do you want to build a cheap Kickstarter product, as your FPGA question suggests? That's EE and Computer Engineering as well.



