We do a fair bit of FPGA design in SpinalHDL, and have taped out several ASICs with parts of the design done in SpinalHDL at my day job.
In general: No, alternative HDLs don't see a lot of use, and I'd argue that we qualify as 'academia' since the ASICs are NIH funded and we tend to work with a lot of academic partners and on low-quantity R&D projects.
Having said that, every time we've deployed SpinalHDL for a commercial client they've been blown away by the results. The standard library, developer ergonomics, test capabilities, and little things like having clock domains as part of the type system make development so much faster and less error prone that the NRE for doing it in Verilog just doesn't make sense.
You get access to the entire Java and Scala ecosystem at elaboration and test time. We deploy ScalaCheck in our test harnesses to automatically generate test cases and shrink failing inputs down to minimal reproductions of edge cases. It's incredibly powerful.
I used Zig (not MicroZig, just rolled my own HAL) for the bootloader and firmware on a soft RISC-V SOC + custom peripherals recently and had somewhat mixed, though positive feelings about it.
On the positive side:
- As a 'safer C', getting things up and running was a breeze, and writing code largely felt intuitive.
- The additions to C (slices/iterators, enhanced structs, arbitrarily sized integers) are excellent
- It produces fairly small firmware images (useful when stuffing a boot rom in logic/EBRAM)
- Easier (than C IMO) to get up and running with formatted IO vs retargeting libc
- Comptime is neat, and you can build some decent low-cost abstractions with it (ex: I built a comptime heavy write-through cache for key-value storage that required very little overhead and largely self-generated based on a simple struct)
- I really enjoy the use of structs for function+data organization. It maps well to hardware instances, giving you an 'object' like feeling without OOP ick.
On the negative side:
- The compiler is still a seriously moving target. Upgrading sometimes meant rather large refactors.
- Documentation is somewhat poor IMO.
- As a long time user of Nim (including on really lean embedded targets), compared to hygienic macros, comptime falls way short.
- The lack of first class interfaces/traits/typeclasses is not my favorite. The currently suggested alternatives are so un-ergonomic I'd almost call them hostile.
All-in-all, I'm excited to see where Zig ends up. After nearly 20 years writing embedded code I'm really (really really really) tired of C. The embedded systems community really needs to embrace better tools.
> After nearly 20 years writing embedded code I'm really (really really really) tired of C. The embedded systems community really needs to embrace better tools.
Hear hear, brother!
I get particularly frustrated about the second point, and I've made it my career goal to get my teams to update their tools and processes. For example, when I joined my current team, they didn't compile debug symbols and didn't know how to use a debugger on our system! Hell, in 2024 I still have colleagues who prefer to use a .dis and .map file rather than leverage the debug symbols... "What is this 'mixed code and disassembly' display you speak of?"
I know people who insist on putting register addresses (in hex) in their code rather than make a variable because "it's easier to debug by comparing to the user guide. If it was a variable, you would have to look up its value every time". It looks somewhat like this:
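(Address, value, and function name all invented for illustration:)

    #include <stdint.h>

    /* The anti-pattern: the register address and value are inlined in
       hex, with the vendor's user guide as the only documentation.
       Six months later, nobody remembers what lives at 0x40021018. */
    void enable_peripheral_clock(void) {
        *(volatile uint32_t *)0x40021018 = 0x00040000;
    }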
So, my favorite pattern in Rust for this is to use a u8- (or u16-, etc.) repr enum for register addresses and values. Assuming a direct register API vice a wrapper, you'd do something like:
    write_register(Reg::Config as u8, value)
Where `value` may be constructed from variables or, wait for it... might be a binary literal, because it's easier to compare to the datasheet if it's a one-off vice a general API. If it's a general API, it's probably handled with a config struct etc., where each field is a u8-repr enum.
Code like this should IMO always have a reference to the relevant DS table in comments, and probably an explanation of why you're setting the bits that way.
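Sketched in C for brevity (register name, section number, and bit meanings all invented):

    #include <stdint.h>

    #define REG_CONFIG 0x01u

    void write_register(uint8_t reg, uint8_t val);  /* assumed to exist */

    /* CONFIG register: datasheet section 8.3.2, Table 12 (made up for
       illustration). Binary literal (standard in C23, a common compiler
       extension before that) so the bits line up with the datasheet's
       table: continuous conversion + 16x oversampling. One-shot mode
       would add latency on every poll, so we leave it disabled. */
    void configure_sensor(void) {
        write_register(REG_CONFIG, 0b00010011);
    }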
Can you share what your experience has been like with Nim on embedded targets? Both Nim and Zig are on my wishlist to try out for embedded but I'm doing C and RTOSes for the next few projects.
Ada with Ravenscar seems like another "solves a lot of common problems intelligently" option, but I haven't had much time to try it beyond a simple proof of concept.
Like any time you get off the beaten path, there are rough patches, but to paraphrase another comment in this thread, "Nim is just C", so anything that felt a little awkward just meant using an `importc` pragma and doing whatever I needed in an environment I felt more comfortable in. (You can also have C code compiled with a {.compile: "foo.c"} pragma, so if a module required something to be done in C, you didn't have to monkey about with the build system to include it; it 'just worked'.)
The biggest negative I ever hit was that early on, Nim didn't have support for `volatile`, which meant it was a non-starter for doing anything with MMIO (I ended up being the one who added volatileLoad/volatileStore to Nim's stdlib so I could use it on a Cortex-M without having to drop into C so much).
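For anyone who hasn't hit this: those intrinsics give you the same guarantee as a volatile pointer access in C. A minimal sketch, with a made-up GPIO address:

    #include <stdint.h>

    /* 0x40011010 is an invented MMIO address. The volatile qualifier
       forces the compiler to emit a real load and store on every
       access rather than caching the value in a register or eliding
       the write as "dead". */
    #define GPIO_ODR (*(volatile uint32_t *)0x40011010)

    void led_toggle(void) {
        GPIO_ODR ^= (1u << 5);
    }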
For the most part though, if you're reasonably comfortable with embedded toolchains (i.e. you understand how to write linker scripts, understand what happens between a reset and actually getting into `main()`, etc), it's not much of a hurdle to set up a simple build system to compile your Nim code to C, link appropriately, and then sort of forget about it.
It's been a while, but IIRC I also got step-through debugging working with OpenOCD by having the nim compiler generate `#line` pragmas and including debug symbols, which was pretty neat.
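The generated C just carries source markers; a hypothetical excerpt (blink_led is an invented name):

    /* The #line directive attributes the next statement to main.nim
       line 42, so the DWARF debug info (and therefore gdb driving
       OpenOCD) points back at the Nim source instead of this file. */
    static void blink_led(int ms) { (void)ms; /* stub for illustration */ }

    void step(void) {
    #line 42 "main.nim"
        blink_led(500);
    }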
This was all pre ARC/ORC, so I did have to make sure to be careful not to use ref objects, but ultimately it felt pretty seamless. I still tend towards fully manually managed memory on embedded projects, but I'd be curious to give it a go.
I use Nim at work for embedded firmware development right now. We evaluated Zig, but that was a year and a half ago when we started the project, and Zig just wasn't as far along as it is now.
I’m currently in the process of writing a nice HAL/dev framework agnostic FreeRTOS binding in Nim, which maybe you’ll find useful once we can post it?
OS is not the same level of abstraction as a language.
This is an active problem in embedded. Cheap microcontrollers can have multiple cores that are not even the same architecture.
This means that just to get "blinky" running, you need to choose a (language, OS) tuple. And, given that "language" is generally "C", that means that your abstraction choices for OS are lousy.
Side question: last I checked, FreeRTOS didn't do a great job when multiple microcontrollers were involved--especially if the communication channels or synchronization were hardware-based. Has this changed?
IIRC the OS needs to have processor-specific task-switching code in assembly, and FreeRTOS must have tons of ports already. Nim supposedly has nice FFI for C. The niceness of FreeRTOS is then mostly about its... system call design. If it's good, then you could probably progressively rewrite FreeRTOS in Nim.
Never used Nim, and it's been a while since I did any embedded.
> The lack of first class interfaces/traits/typeclasses is not my favorite. The currently suggested alternatives are so un-ergonomic I'd almost call them hostile.
The use of anytype as a sort of universal interface is my least favourite part of Zig. I’ve seen enough griping about it that I’m hopeful something happens here.
>- As a long time user of Nim (including on really lean embedded targets), compared to hygienic macros, comptime falls way short.
In what way? Comptime should be generally capable of anything macros are.
>- The lack of first class interfaces/traits/typeclasses is not my favorite. The currently suggested alternatives are so un-ergonomic I'd almost call them hostile.
Terribly difficult to implement without breaking a major language tenet of “no hidden control flow”.
But I found that once I left behind OO style of thinking, I haven’t missed this all that much. For the rare time I do generalize like this, you can literally just check at comptime that the passed in type provides the necessary decls. It’s not terribly complex or hostile (although tooling could use some work around it)
> In what way? Comptime should be generally capable of anything macros are.
I started to reply to this with 'Comptime is generally capable of doing anything that Nim's templates can accomplish (but not its macros)', but I stopped myself: even though Zig's comptime is more akin to Nim's templates than its macros (in my opinion), Nim templates are more powerful, as they allow you to embed arbitrary blocks of code to implement constructs similar to Python's context managers (which I'm fairly certain you can't do with comptime).
W/R/T macros vs comptime, you can't create arbitrarily complex DSLs with comptime the way you can with Nim's macros, as you don't have full control over AST generation.
All that said, the power you'd get from a macro or template system like Nim's doesn't really jibe with Zig's whole "no hidden control flow" thing.
>Terribly difficult to implement without breaking a major language tenet of “no hidden control flow”.
I dunno if I agree with that; even very simple Rust-like traits that simply enforce at compile time that a struct implements a given interface (static dispatch only) would go a long way, without compromising obvious control flow IMO.
>But I found that once I left behind OO style of thinking, I haven’t missed this all that much. For the rare time I do generalize like this, you can literally just check at comptime that the passed in type provides the necessary decls. It’s not terribly complex or hostile (although tooling could use some work around it)
Respectfully, I don't view it as OO thinking (I think typeclasses come from Haskell...). Making polymorphism reasonably ergonomic goes a long way towards code reuse and (again, only my humble opinion here) would help with what some of the folks in this comment section are talking about w/r/t code reuse and generalizing a HAL layer in a consistent way, without forcing users (or library authors) to write a bunch of ad-hoc code to check that functions exist on a given struct, or to manually implement dispatch tables.
I would generally agree that polymorphism is necessary, except maybe for embedded systems. You want more of a closed world system there, not an open world system that permits arbitrary extensions.
By open world, I'm thinking of abstractions like closures and interfaces which permit arbitrary extension and require dynamic dispatch. Given any fixed set of such abstractions, you can simulate in a closed world system via something like defunctionalization during whole program compilation, which is what you do on embedded systems. You can probably do a defunctionalization transformation with comptime, and that gets you better visibility on the state of the system in a way that's not possible with a truly open world system.
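A minimal sketch of that transformation in C (names invented): the closed set of 'closures' becomes an enum tag plus its captured data, and application is an exhaustive switch, so every possible behavior is visible at a glance:

    #include <stdint.h>

    /* Each would-be closure is a case in a closed enum... */
    enum op_tag { OP_ADD_CONST, OP_SCALE };

    /* ...paired with whatever data it "captured". */
    struct op {
        enum op_tag tag;
        int32_t     arg;
    };

    /* "Calling" a defunctionalized closure: no function pointers, no
       dynamic dispatch, and the compiler can see the whole world. */
    static int32_t apply(struct op f, int32_t x) {
        switch (f.tag) {
        case OP_ADD_CONST: return x + f.arg;
        case OP_SCALE:     return x * f.arg;
        }
        return x; /* unreachable when every tag is handled */
    }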
> All that said, the power you'd get from a macro or template system like Nim's doesn't really jibe with Zig's whole "no hidden control flow" thing.
Only if you do it wrong... It's quite simple to mandate a special character which indicates that "something's happening here". For example, you could have

    y = a #+ b

with `+` being a function operating on matrices, and the `#` indicating that this is a function call, not the regular operator.
Nim is “just C” at the end of the day, so by leveraging the --compileOnly flag you can use Nim anywhere you can use C
Now that ARC/ORC are the default memory management strategy, it's even possible to keep some of the niceties from the standard library when doing so, though that will depend on your target of course.
Even going the “no heap allocation” route is totally feasible; you sort of end up using Nim as a nicer C syntax with extra features. All of the libraries we write for work have two interfaces: one that returns (possibly heap-allocated) results, and one that takes a buffer (as a `var openArray[T]` param) instead.
> - The lack of first class interfaces/traits/typeclasses is not my favorite. The currently suggested alternatives are so un-ergonomic I'd almost call them hostile.
Personally, I don't see this as a problem. I like implementing an interface in Zig the same way I implement one in C: I use a function pointer that takes a packet defining the operation to execute, along with the data for that operation. Zig's exhaustive switch statements make these even better to work with.
Here is some pseudocode to illustrate what I mean:
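(Sketched in C, since that's where I borrowed the pattern from; all names are invented.)

    #include <stddef.h>
    #include <stdint.h>

    /* The "packet": an op code plus whatever data the op needs. */
    enum io_op { IO_READ, IO_WRITE };

    struct io_packet {
        enum io_op op;
        uint8_t   *buf;
        size_t     len;
    };

    /* The entire "interface" is a single function pointer. */
    typedef int (*io_fn)(struct io_packet *pkt);

    /* One concrete implementation. The switch dispatches on the op;
       with -Wswitch, a forgotten case is flagged at compile time
       (Zig makes the exhaustiveness check a hard error). */
    static int mem_io(struct io_packet *pkt) {
        static uint8_t store[64];
        switch (pkt->op) {
        case IO_READ:
            for (size_t i = 0; i < pkt->len && i < sizeof store; i++)
                pkt->buf[i] = store[i];
            return 0;
        case IO_WRITE:
            for (size_t i = 0; i < pkt->len && i < sizeof store; i++)
                store[i] = pkt->buf[i];
            return 0;
        }
        return -1;
    }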
This is a nice breakdown; however, it leaves out one interesting piece: active slew-rate controllers.
In some cases where you need to exceed the typically allowed bus capacitance either due to a high number of attached devices, or over a long cable run (it happens...it sucks, but it happens), you can use a part like the LTC4311 which, rather than using resistors to passively pull the bus lines to a resting high state, detects the direction changes and actively assists in pulling the lines to their intended states.
I wish software packages had data sheets! It would be great to have a concise/standardized format with bullet points and a "typical applications" section on github landing pages.
Analog Devices and Linear Technology (now owned by Analog Devices) have the cleanest looking datasheets. Texas Instruments is ok but not great. Standard Microcircuit Drawings, the kind of government datasheets used for milspec parts, are absolutely horrendous and should be avoided if the manufacturer has a normal looking datasheet to look at.
Linear's are great; I find the AD sheets hard to read sometimes. Microchip historically has some decent ones, but it's been a while since I've used their parts.
Datasheets for Japanese connectors are like a circle of hell for me. Confusing and possibly incomplete dimensions, and the drawing usually looks like it was printed, scanned, and converted to jpg several times.
Ugh, I've been using a datasheet from Panasonic [0] recently, and it's been a trip. The original Japanese is all there, with a lackluster English translation below each paragraph. Plugging the Japanese into Google Translate for the particularly bad sections usually helps. At least this version is clean; I ran across a few PDFs floating around for this part that looked like they had been run through the print-scan-jpeg cycle a few times.
Wait until you see Chinese-market-only parts without datasheets in English, the datasheets themselves mostly done by the not-so-bright engineers of the sales offices of the big Western semis.
TI's technical reference manuals for their MCUs are some of the best I've used, though. That's not saying much. But compared to Marvell or NXP/Freescale they're really good.
Cypress PSOC chips have a bunch of software modules you can load into them to implement various functionality, and each piece of such software includes a datasheet, just as if it was a standalone chip. It's glorious.
In "Object-Oriented programming an evolutionary approach", Brad J. Cox (the inventor of Objective-C) hoped for the birth of "software-ICs", software components that should have been widely reusable, and documented with the equivalent of the datasheets used for traditional ICs. This was his vision for OOP.
IC datasheets are dominated by the mundane but critical things like timing diagrams and electrical tolerances. Software components can't even agree on which "ICs" fit into which breadboards.
The problem I have is the sense that "you're only allowed to ask for progress or give direction if you code it".
Maybe you're, e.g., a UI/UX expert: you can help plan the direction for a project, give feedback on the changes needed and why, and help make the project a success, but you just aren't competent enough to code those changes. Sure, no one owes you their FOSS work, but it seems like we lose something if the only input allowed is from people able to do the coding.
Of course you might also just be a user. If you pay it forward, do you get to make a feature request?? If I don't code it myself ... maybe I should stop being a user if I can't contribute code?
Demanding work for free, and offering suggestions for improvements can be seen as synonymous, but they can actually be vastly different.
Projects I use heavily, like Ubuntu, I try to make myself useful offering advice on forums. That's not putting "money" in the bank of any coders but seems like it's in the spirit of FOSS - contributing what we can to create a better system.
Read your comment after a whole day of desperately troubleshooting an i2c bus on (way too) long wires. I was already holding an MCP2515 board in my hands, thinking about how to retrofit every device with CAN and knowing it would take weeks to get that running robustly. Maybe adding LTC4311s will keep the system running until I truly have time for the conversion.
Thanks a lot for your comment! (Just created an account to write this!)
That's right. A current source (active) instead of a 'poor man's current source' (a resistor) can help fix problems. I had to run 400 kHz i2c over a few meters, and it really helps to drive the cables properly.
Typically, no. In SPI all lines are actively driven high or low rather than an open drain configuration. The bus master drives SCK (SPI clock), MOSI (master out, slave in) and CS (chip select), and the slave device(s) drive the MISO (master in, slave out) line(s). From an operating perspective, pullup/pulldown resistors are not required.
Now having said that, in some cases it's considered appropriate to add weak pullup/pulldown resistors to the data lines to ensure that they're in the expected state on power up and to prevent glitches from putting slave devices into weird states.
Three-wire SPI is pretty common, where everyone's MOSI and MISO are connected together open-drain style and the master clocks out 1s to let the slave pull the data line down for read operations. You have to clock out something from the master during a 4-wire SPI read anyway, so it doesn't change a lot most of the time.
(From someone who is working on a rv32im implementation in Clash)
I really love writing RTL in Haskell compared to Verilog/VHDL, but as a language I think it suffers from the same thing a lot of other languages do: too many ways to do the same thing. Mix that with a language that encourages metaprogramming and you've got yourself a recipe for every complex Haskell project basically becoming its own little DSL. It's also often made worse because so much Haskell is written by type theorists and mathematicians churning out symbol soup without a thought for the rest of us plebs.
IMO this is actually pretty readable and the implementation is stitched together nicely. There are some Haskell/ML-isms like lenses, monad transformers, and partial functions sprinkled in there that complicate a casual read-through, but if you've got a grasp on those, most of this is reasonably clear.
It isn't the most complex beast (as others have pointed out, it skips things like the Zicsr and M extensions which add significant complexity) but it could serve very well as say, a companion core to some more complex piece of hardware. Perhaps one that requires reconfigurable logic that would be impractical in silicon but doesn't require realtime interrupts or fast math?
KeyMe | (Sr) Software Engineer(s) | New York, New York | Full-Time | ONSITE | key.me
KeyMe manufactures and operates a nation-wide fleet of robotic key cutting kiosks and seeks to provide fast accurate key duplication, digital key storage to prevent lockouts, and full-service locksmith services for non-key related needs.
We're looking to grow our control systems team to continue to scale out our fleet of kiosks. In the last 3 years we've grown 10x, and are on track to double our install base again next year. We offer a wide variety of engineering activities and have a great team working on some extremely challenging problems (remotely administering 5k robots on cell modems? Live configuration sync between 5k nodes and a central server with varying latency?). Our tech stack is primarily Python3|Haskell|Linux and we're looking to add some folks with Docker | RabbitMQ | FP experience (any FP language is fine).
If you're interested in working with robotics and hardware AND have an interest in functional programming, I'd love to hear from you.
Feel free to check out our job posting: https://boards.greenhouse.io/keyme/jobs/4266786002
Or contact me directly: jeff.ciesielski[at]key.me
(We will also have some additional roles opening up in the very near future, so if you're a CV / ML / Data engineer, I'd love to hear from you too!)