
In Nov 2018, Google engineers who designed the TPU gave a presentation about the use of open-source Chisel for designing the ASIC, https://youtube.com/watch?v=x85342Cny8c

From https://techcrunch.com/2018/07/25/google-is-making-a-fast-sp...

> Google will have the cloud TPU ... to handle training models for various machine learning-driven tasks, and then run the inference from that model on a specialized chip that runs a lighter version of TensorFlow that doesn’t consume as much power ... dramatically reduce the footprint required in a device that’s actually capturing the data ... Google will be releasing the chip on a kind of modular board not so dissimilar to the Raspberry Pi ... it’ll help entice developers who are already working with TensorFlow as their primary machine learning framework with the idea of a chip that’ll run those models even faster and more efficiently.

If you're interested in playing with Chisel, the "Chisel Bootcamp" is now hosted on Binder meaning you can run through a fair amount of learning content in a browser [1,2].

As a longer, elaborating point: Chisel is much closer to the LLVM compiler infrastructure project than a new hardware description language. Chisel is a front end targeting the FIRRTL circuit IR. There's a FIRRTL compiler that optimizes the IR with built-in and user-added transforms. A Verilog emitter then takes "lowered" FIRRTL and emits Verilog.
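To make the LLVM analogy concrete, here is a toy sketch of the IR-plus-transforms idea in plain Python. None of these names come from the real FIRRTL codebase; it only illustrates the shape of "circuit IR in, transform passes, Verilog text out":

```python
# Illustrative sketch only: a toy circuit IR with one "transform" pass,
# loosely analogous to how the FIRRTL compiler rewrites its IR before
# a Verilog emitter walks the lowered form. Invented names throughout.
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    value: int

@dataclass(frozen=True)
class And:
    a: object
    b: object

def constant_fold(node):
    """A transform: fold And(Const, Const) down to a single Const."""
    if isinstance(node, And):
        a, b = constant_fold(node.a), constant_fold(node.b)
        if isinstance(a, Const) and isinstance(b, Const):
            return Const(a.value & b.value)
        return And(a, b)
    return node

def emit_verilog(node):
    """A toy 'Verilog emitter' over the lowered IR."""
    if isinstance(node, Const):
        return str(node.value)
    return f"({emit_verilog(node.a)} & {emit_verilog(node.b)})"

circuit = And(Const(0b1100), Const(0b1010))
lowered = constant_fold(circuit)
print(emit_verilog(lowered))  # prints "8" (0b1100 & 0b1010 = 0b1000)
```

The real FIRRTL compiler works the same way in spirit: built-in and user-added passes each take the IR and return a rewritten IR, and emission only happens once the IR is fully lowered.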

Consequently, Chisel is the tip of the iceberg on top of which the Edge TPU was built. The speakers in the video mention this explicitly when explaining the "Chisel Learning Curve" slide and when discussing automated CSR insertion.

As a further elaboration, Chisel is, pedantically, not High Level Synthesis (HLS): you write parameterized circuit generators, not an algorithm that is optimized down to Verilog.
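The generator idea can be shown PL-agnostically (plain Python here, with an invented `make_adder` name; real Chisel generators are Scala programs that build an in-memory circuit graph compiled via FIRRTL, not strings): a generator is an ordinary program that, given parameters, produces a concrete circuit.

```python
# Illustrative sketch: one *generator*, many instances. The "circuit"
# here is just Verilog text for clarity; Chisel instead elaborates a
# circuit graph. `make_adder` is an invented name for this example.
def make_adder(width: int, name: str = "adder") -> str:
    return (
        f"module {name}(input [{width-1}:0] a, input [{width-1}:0] b,\n"
        f"              output [{width}:0] sum);\n"
        f"  assign sum = a + b;\n"
        f"endmodule\n"
    )

# The same code yields an 8-bit and a 32-bit adder: no copy-paste,
# no preprocessor, just parameters.
print(make_adder(8))
print(make_adder(32, name="adder32"))
```

Contrast with HLS, where you write the *algorithm* (`sum = a + b` over arrays, say) and the tool decides the microarchitecture; here the designer still specifies the structure, just programmatically.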

[1] https://mybinder.org/v2/gh/freechipsproject/chisel-bootcamp/...

[2] https://github.com/freechipsproject/chisel-bootcamp

This was an amazing Sunday night rabbit hole to go down - thanks!

So my guess is that Chisel is one of the many responses to the two horrors that are VHDL and Verilog.

Unfortunately, Chisel is built on Scala, and I have no interest in learning Scala. Though I'm intrigued by the claim of using generators and not instances, and would be interested in a white paper that explains it in PL-agnostic terms (PL: programming language).

Also on my to-do list is MyHDL [1], a Python solution to the same problem. (Has anyone tried it and found it better than VHDL/Verilog?)

[1] http://www.myhdl.org/

> Unfortunately, Chisel is built on Scala, and I have no interest in learning Scala.

That's a strange reason for not wanting to reap the benefits of Chisel. Care to explain your rationale?

There is another compile-to-HDL "language" called SpinalHDL[1], so I would actually argue that Scala's metaprogramming features seem to be a good fit for this use case.

[1] https://github.com/SpinalHDL/SpinalHDL

Rust has extensive metaprogramming features as well, so hopefully we'll be able to build a comparable framework starting from that.

I love Rust just as much as the next guy, but not everything needs to be rewritten in Rust...

I get the impression that people who talk about the "horrors" that are VHDL and Verilog for hardware design are software developers who have little to no knowledge about hardware design processes.

There are reasons why VHDL/Verilog are still in use in the industry and why high-level synthesis hasn't taken off.

VHDL/Verilog for hardware design are not broken. I won't claim that there isn't space for improvement (because there is), but there isn't anything fundamentally broken in them. They are fit for purpose and they fulfill all of the needs we have.

What could be massively improved is actually the functional verification languages we use, SystemVerilog for verification is in serious need of an overhaul.

OK. I'll bite. I only have experience with verilog, but it's basically uncomfortable to work with in the sense that there are absolutely no developer ergonomics. We're well into the 21st century and you'd think that our HDLs would learn from everything that the software world has learned.

1) The syntax is very finicky (slightly more so than C, I'd say). Most software languages (thanks to more experience with parsers and compilers) have moved on from things like requiring semicolons; Verilog has not.

2) Writing tests is awful. Testbenches are crazy confusing. Much better would be some sort of unit testing system that does a better job of segregating what constitutes "testing code" from the "language of the gates". You would have a hard time doing something like, say, property testing using Verilog.

3) There isn't a consistent build/import story with Verilog. I once worked with an engineer who literally used Perl as a Verilog metaprogramming language. His codebase had a hard-to-find Perl frankenbug which sometimes inserted about 10k lines of nonsense (which somehow still assembled a correct netlist!) but caused gate timings to severely miss and the footprint to be bloated. It took the other hardware developers a week to track down the error.

None of these things have anything to do with the fundamental difference between software and hardware development.

For Chisel: at least to some degree, you can get some developer ergonomics from the Scala ecosystem, do most of your unit, functional, and integration testing outside of Verilog, in Chisel, and then autogenerate Verilog at the last minute and do a second round of testing to make sure Chisel did everything right. It's the same reason why people do things like "use Elm to develop the frontend, compiling down to JavaScript", and it's a perfectly valid strategy.
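To make 2) concrete, here is roughly what a property test of a hardware model looks like once the model lives in a general-purpose language (a hand-rolled random check in plain Python; all names are invented for this sketch, and frameworks like Hypothesis, or Chisel's own test harnesses, make this far more ergonomic):

```python
# Illustrative sketch: property-test a gate-level adder model against
# the "golden" behavior (integer addition) over random inputs.
import random

def ripple_carry_add(a_bits, b_bits):
    """Gate-level ripple-carry adder over little-endian bit lists."""
    assert len(a_bits) == len(b_bits)
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                 # full-adder sum
        carry = (a & b) | (carry & (a ^ b))       # full-adder carry-out
    return out, carry

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

# Property: for all inputs, the gate-level model agrees with `+`.
WIDTH = 8
for _ in range(1000):
    x, y = random.randrange(2**WIDTH), random.randrange(2**WIDTH)
    s, carry = ripple_carry_add(to_bits(x, WIDTH), to_bits(y, WIDTH))
    assert from_bits(s) + (carry << WIDTH) == x + y
print("1000 random cases passed")
```

Nothing here is hardware-specific tooling: the test is ordinary software, separated cleanly from the "language of the gates", which is exactly the segregation the parent is asking for.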

> 2) Writing tests is awful. Testbenches are crazy confusing. Much better would be some sort of unit testing system that does a better job of segregating what constitutes "testing code" from the "language of the gates". You would have a hard time doing something like, say, property testing using Verilog.

SystemVerilog makes this distinction between RTL (language of the gates) and verification environment code (wrt. testing, they are different things in my experience) very clearly. SystemVerilog inherits much of what people dislike about Verilog, but it makes writing large verification environments much easier. Again, not without lots of potential pain points, but you can do an awful lot that way.

On a slight aside - it worries me (though perhaps unreasonably) how different the approaches to functional verification are between the software world and the hardware world.

Software verification seems to (generally) be a much more continuous affair, while for hardware there is an extremely intense period of verification before the product is delivered to a customer (as IP) or physically manufactured. This arises because fixing software bugs is cheap by comparison to fixing hardware (again, please accept my generalising!).

It makes me shiver a little to hear people applying software "testing" strategies and terms to verifying actual hardware. I don't know if this is reflected by their actual practice, of course. There is a lot of potential for the hardware community to make use of so many software development practices in their verification environments (big SystemVerilog testbenches are giant class hierarchies which are far more akin to straight-up software), but I'm yet to be convinced about hardware itself. The development constraints are so different, and the possibility for continuous development is hindered by the hard cut-off point (manufacture).

I am a software engineer who's been involved in the tapeout of a few ASICs (although none of the TPUs). Particularly when you plan to build a series of chips, the continuous approach taken by software is massively preferable. X v2 does what X v1 did, plus some additional things, and with all of the errata fixed. Also, you find the errata in X v1 after tapeout but before your driver team does, saving them an enormous amount of work trying to track down a driver bug that's actually a HW bug (maybe even one with a simple workaround).

> Particularly when you plan to build a series of chips, the continuous approach taken by software is massively preferable.

For sure. I think continuous integration and cataloging of things like coverage collection is something hardware development really benefits from.

The things that hardware development can learn best from the software world are (in my opinion) mainly down to developing and maintaining verification environments, because they are (mostly) just big software projects. The constrained-random variety are, anyway.

I really don't get why some people get so hyped up about typing semicolons.

Maybe we should write in our native tongues without any kind of punctuation.

When we are talking hardware design languages, semi colons or not seem pretty damn far down the list of things that actually matter.

This feels like as shallow of a dismissal as "lisp uses too many parens"

Lisp does use too many parens. There's a real Gunning-fog cost to debugging Lisp, and that's one reason why I don't code in it even though professionally I have my choice of languages and Scheme was one of the first I learned.

"I really don't get why some people would want sub-10-second completion of their unit tests. Why not just wait 30s to a minute to test everything?"

I really don't get what typing semicolons has to do with unit tests.

And yet, you typed the punctuation marks in your comment even though all of us would understand you without them.

I didn't type out the word "second"

1. Verilog requires semicolons almost everywhere. Can you point to a specific example?

2. Are we talking about Verilog or SystemVerilog? Verilog is not suitable for functional verification, people usually use SystemVerilog and methodologies like UVM for verification.

3. It's hard to tell what your colleague did exactly, but it sounds like he over-engineered something himself.

You are talking about the advantages of Chisel for functional verification, not for hardware design, which was exactly the point I was trying to make.

RE: #3 I think Perl for Verilog metaprogramming is pretty common, but I'm not really sure, have rarely written the stuff myself. But I've seen it before.

It's an intel thing, apparently.

(Hi Pedro!)

Maybe nitpicking, but languages like Chisel and MyHDL aren't really HLS. With them there is a straightforward mapping between the written language and the rendered result, and there should be little surprise in what logic is actually generated.

I am convinced that some specimen of this class of languages will eventually overtake verilog. One feature I'm eagerly waiting for is an equivalent of Option/Maybe types, which makes it impossible to access some signals unless they are signaled as valid by a qualifier signal.
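That qualifier idea can be sketched in ordinary software terms (plain Python; `ValidSignal` is an invented name for this illustration, though Chisel's `chisel3.util.Valid` bundle, which pairs `bits` with a `valid` line, is in a similar spirit):

```python
# Illustrative sketch: a signal whose payload cannot be read unless its
# qualifier says it is valid - an Option/Maybe type for wires. In real
# hardware this would be a static/elaboration-time check, not a runtime
# exception; the runtime error here just stands in for that guarantee.
class ValidSignal:
    def __init__(self):
        self._valid = False
        self._bits = 0

    def drive(self, bits):
        self._valid, self._bits = True, bits

    def invalidate(self):
        self._valid = False

    @property
    def valid(self):
        return self._valid

    @property
    def bits(self):
        if not self._valid:
            raise RuntimeError("read of bits while valid is deasserted")
        return self._bits

sig = ValidSignal()
sig.drive(0xAB)
assert sig.valid and sig.bits == 0xAB
sig.invalidate()
# Reading sig.bits now raises instead of silently returning stale data.
```

The win over raw Verilog is that "data plus its qualifier" travels as one typed unit, so forgetting to check `valid` becomes an error the tooling can catch rather than a latent bug.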

I'm curious about what improvements you would like to see in SystemVerilog?

It doesn't look like you're open to any serious criticisms of the two Vs, but the readers of your comment and mine deserve to look at the arguments and make up their own mind. Therefore, I'm linking the pages regarding rationale for some of the recent HDLs:

- Chisel: https://github.com/freechipsproject/chisel3/wiki/Frequently-...

- MyHDL: http://www.myhdl.org/start/why.html

- SpinalHDL: https://spinalhdl.github.io/SpinalDoc/regular_hdl

Chisel isn't high level synthesis. It's not a good name to describe it when the overwhelming majority of "HLS" projects are C/C++ compilers and are completely different beasts in design and theory. Honestly, almost every experienced hardware engineer I meet who's only heard of these languages thinks this, so I partially think it's a marketing failure, but I also get the impression HW engineers think literally anything that is not Verilog is "high level" which is just simply untrue. (If I had any say in the matter, probably the only real "high level synthesis" language that isn't just a tagline for "Compile C++ to Hardware" I've experienced is BlueSpec Verilog.)

I haven't used Chisel personally, but from my experience with Clash -- it is better to think of them as structural RTLs that have vastly better abstraction capabilities than VHDL/Verilog have. And I don't mean whatever weird things hardware designers think up when they say "abstraction" and they chuckle about software programmers (before writing a shitload of tedious verification tests or using Perl to generate finite state machines or some weird shit but That's Cool And Good because most don't know the difference between a "macro" and a "preprocessor" and no I am not venting), I mean real abstraction capabilities -- for example, parametric types alone can drastically reduce the amount of boilerplate you need for many tedious tasks, and those parametric types inline and are statically elaborated much in the same way you expect "static elaboration" of RTL modules, etc. to work. Types are far more powerful than module parameters and inherently higher order, so you get lots of code reuse. In Clash, it's pretty easy to get stateful 'behavioral'-looking code that is statically elaborated to structural code, using things like State monads, etc., so there's a decent range of abstraction capabilities, but the language is generally very close to structural design. The languages are overall simply more concise and let you express things more clearly for a number of reasons, and often can compare favorably (IMO) even to more behavioral models (among others, functions are closer to the unit of modularity and are vastly briefer than Verilog modules, which are just crap, etc.). Alternative RTLs like MyHDL are more behavioral, in contrast.

The biggest problem with these languages are that the netlists are harder to work with, in my experience. But the actual languages and tools are mostly pretty good. And yes, they do make verification quite nice -- Clash for example can be tested easily with Haskell and all Clash programs are valid Haskell programs that you can "simulate", so you have thousands of libraries, generators, frameworks etc to use to make all of those things really nice.

(This is all completely separate from what a lot of hardware designers do, which is stitch together working IP and verify it, as you note with the verification comment. That's another big problem, arguably the much more important one, and it is larger than the particular choice of RTL in question but isn't the focus here.)

Hence the suggestion to create a language/tooling-neutral intermediate representation based on FIRRTL[1].

[1] https://github.com/SymbiFlow/ideas/issues/19

If you'd like something for your to-do list: I built this once, but it was a long time ago, Julia has gotten a lot better since, and it only does combinatorial logic (not sequential logic). I had an idea of how to do sequential logic using lambda closures in Julia, but I never got around to it.


VHDL is quite nice as an Ada-influenced language; I'm not sure what is so horrifying about it.

> ...and I have no interest in learning...

Single biggest red flag when hiring engineers.

Good thing they're not asking for a job.

