One thing I don't quite understand about using Haskell for circuit design is that at a first glance it also seems like pure functional programming has an impedance mismatch with what the circuitry physically does. (For reference, I've programmed in Haskell extensively before.)
For example, much of Haskell directly or indirectly relies on recursion -- but this is nearly nonsense when talking about silicon! There's no call stack, for one! More importantly, any algorithm requiring repeated operations like this would be inherently inefficient and undesirable in a design space where latency matters.
I have a feeling that one reason some of these alternatives haven't "taken off" and swept away the legacy languages is because they're not really that ideal either. [1]
Perhaps there's an ideal "concurrent data processing" programming paradigm waiting to be discovered that is neither like procedural languages nor pure functional languages.
[1] This is purely my uninformed layman perspective, of course. I'd love to be corrected by people who've worked in the field.
Why do you feel there is a mismatch? Any digital circuit can be modeled with a pure function that transforms an input stream of values to an output stream of values. And that is actually precisely what Clash does.
Regarding recursion: I believe Clash does support a limited form of it, namely when you can show that every recursive call strictly decreases the problem size. E.g. recursion on a vector of size n must mean every recursive call is applied to a strictly smaller vector.
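As a rough illustration of that kind of size-decreasing recursion (plain Haskell, not actual Clash; rippleAdd is my own example, not anything from the Clash library): a ripple-carry adder defined by recursion on bit lists, where every recursive call gets strictly shorter lists, so for a known width it unrolls into a fixed chain of full adders.

-- One-bit full adder: two input bits plus carry in, giving (sum, carry out).
fullAdder :: Bool -> Bool -> Bool -> (Bool, Bool)
fullAdder a b cin = ((a /= b) /= cin, (a && b) || (cin && (a /= b)))

-- Ripple-carry adder by recursion on LSB-first bit lists.  Each call
-- works on strictly shorter lists, so the recursion always terminates
-- and can be unrolled into one adder stage per bit.
rippleAdd :: Bool -> [Bool] -> [Bool] -> [Bool]
rippleAdd cin []     []     = [cin]
rippleAdd cin (a:as) (b:bs) = s : rippleAdd cout as bs
  where (s, cout) = fullAdder a b cin
rippleAdd _   _      _      = error "operands must have the same width"

-- rippleAdd False [True,False] [True,True] == [False,False,True]  (1 + 3 = 4, LSB first)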
How does (non-tail) recursion work there? That inherently requires a stack, unless you limit the recursion depth and give each recursion step its own circuit.
I could imagine that limiting yourself to a specific subset of Haskell keeps things under control, but the mismatch seems obvious, at least from the POV of "just write Haskell but compile to an FPGA instead of a binary".
Sorry, I edited my comment and added a part about recursion.
So you are right that not all of Haskell is compilable using Clash. But that also isn't the goal. You are using Haskell to describe your digital circuit. That is generally very different from writing a normal Haskell program. It just so happens that Haskell is good at both.
Can this be because in digital circuits state (memory) plays an important role, while its treatment in a (pure) functional language is not straightforward?
It's straightforward to implement memory in a pure function. For example, if you say a digital circuit is a function from an infinite list of inputs to an infinite list of outputs, you can create a "register" by simply adding a value to the start of the list. For example, in Haskell:
delayCircuit xs = 0:xs
take 10 (delayCircuit [1..])
-- [0,1,2,3,4,5,6,7,8,9]
A function being pure doesn't mean it can't have internal state. As another example, it's perfectly fine to have a completely pure function that has internal state:
import Control.Monad.State (get, put, runState)

withInternalState a = let go = get >>= \s -> put (s + 1)
                      in snd (runState go a)  -- always evaluates to a + 1; the State never escapes
What is important for purity is that this internal state doesn't leak outside the function, which it doesn't for circuits. Outside of things like cosmic rays, heating, etc., a circuit is completely pure in reality.
> A function being pure doesn't mean it can't have internal state.
This seems to be in direct contradiction to what is written on Wikipedia[1], specifically that "the function return values are identical for identical arguments".
If it has non-trivial internal state, how can it return identical values for identical inputs?
Sadly your example eludes me as I don't know any Haskell so it reads like line noise.
It does have identical return values for identical arguments. That doesn't mean the function can't have internal state. Maybe this is more clear in pseudo code:
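Roughly something like this (a sketch; alloc, fill, and read are made-up operations, and the point is that buf never escapes the function):

function f(x):
    buf = alloc(8)       # a local, mutable buffer
    buf.fill(x)          # mutate it in place as much as you like
    return buf.read(0)   # identical x in, identical value out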
That function example is not allowed in Haskell. It's pure functional throughout, not just at "function boundaries".
I agree that conceptually a language can be made where only function boundaries are required to be side-effect free, and internally "anything goes" as long as it doesn't pollute the outside world. This might be a good model for circuit design, especially if using something like an IO monad to store persistent state across clocks, such as flip-flops or registers.
However, this is not what Haskell does, at all. There are no mutation functions like "fill(x)". You have to create the buffer filled with x right from the beginning.
More importantly, in Haskell that buffer is defined recursively, so it looks something like: (x,(x,(x)))
Effectively it is an immutable linked list. Any modification involves either rebuilding the whole thing from scratch (recursively!) or making some highly restricted changes such as dropping the prefix and attaching a new one, such as (y,(x,(x))).
Haskell is not LISP, F#, or Clojure. It's pure and lazy, which is very rare in functional programming. It's an entirely different beast, and I don't see how any variant of it is a good fit for circuit design, which is inherently highly stateful and in-place-mutable.
Such a function is perfectly allowed and even encouraged; see here, where I allocate a mutable array with undefined values and write to and read from it in a pure function, basically replicating the pseudo imperative code I wrote before:
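A sketch along those lines using the ST monad (the function name, the 8-element size, and the choice of STUArray here are mine, not necessarily what the original snippet used):

import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray_, readArray, writeArray)

-- Allocate a mutable 8-element buffer with unspecified initial contents,
-- overwrite every slot with x, then read one slot back.  All of the
-- mutation is confined to the ST computation, so fillAndRead is an
-- ordinary pure function: same x in, same value out.
fillAndRead :: Int -> Int
fillAndRead x = runST $ do
  buf <- newArray_ (0, 7) :: ST s (STUArray s Int Int)
  mapM_ (\i -> writeArray buf i x) [0 .. 7]
  readArray buf 0

-- fillAndRead 42 == 42, every single time.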
Monadic computations are very much in the spirit of Haskell. I don't think the Haskell community views the ST monad as a hack. It's one of many tools in the box of any Haskell programmer.
What I was thinking of was more like a linear feedback shift register. It has zero inputs and outputs random bits, thanks to its non-trivial internal state.
Or as we were saying, a register. Could for example be a function which takes a read/write flag and a value, writes the value to the internal state if write flag is set, and returns the internal value.
edit: In my softcore I have a function to read/write registers, it modifies global state rather than internal state so would not be pure either.
edit2: I guess my point is, for me "state" is something that is non-trivial and retained between invocations. Your example is trivial and not retained (it's always completely overwritten).
You are seeing a single invocation as running your circuit for a single clock cycle after an arbitrary number of clock cycles. That's the wrong approach. A single invocation of a circuit takes a stream of values and produces a stream of values. Every invocation starts from clock cycle 0.
So yes, if your circuit depends on external signals driven by registers living in a different circuit, you need to pass those in as inputs. Essentially circuits are composable just like functions.
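For instance, the register with a write flag you mentioned could be a pure stream function along these lines (a sketch; regWE and the tuple encoding of the inputs are my own choices):

-- Each input is (writeEnable, value); each output is the register's
-- current contents for that clock cycle.  The "internal" state is just
-- an accumulating argument threaded through the recursion.
regWE :: Int -> [(Bool, Int)] -> [Int]
regWE cur ((we, v) : rest) = cur : regWE (if we then v else cur) rest
regWE _   []               = []

-- take 5 (regWE 0 [(True,7), (False,0), (True,3), (False,0), (False,0)])
-- == [0,7,7,3,3]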
A linear feedback shift register always produces the same stream of values no matter how many times you run it. It's completely pure.
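As a sketch of that (the 4-bit width and the tap positions are arbitrary choices of mine), an LFSR modeled as a pure function from a seed to its output stream:

import Data.Bits (shiftL, shiftR, xor, (.&.), (.|.), testBit)

-- A 4-bit Fibonacci LFSR: the output is bit 0 of the state, the
-- feedback bit is bit0 xor bit1, shifted back in at bit 3.  Given the
-- same seed it always produces the same (eventually repeating) stream.
lfsr4 :: Int -> [Bool]
lfsr4 s = testBit s 0 : lfsr4 s'
  where
    fb = (s `xor` (s `shiftR` 1)) .&. 1
    s' = ((s `shiftR` 1) .|. (fb `shiftL` 3)) .&. 0xF

-- take 15 (lfsr4 1) is one full period; after that the stream repeats forever.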
Still, I would be very interested to see how one would implement a shift register, let's say something like the 74HC165, so I can see how the pure functions interact with the register state.
Runnable using replit. To make it as simple as possible I didn't use Clash and only used types available in the Haskell Prelude (Bool instead of Bit, a 64-bit integer as register state, etc.). Every element in the list represents the value coming out of the register on the rising edge of a clock cycle. You can easily extend it if you want latches, enables, etc., but that doesn't really change the core point of the code.
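The snippet itself isn't reproduced above, but a sketch in the same spirit (plain Haskell, no Clash; the name shift74165, the 8-bit Word8 register, and treating the load as synchronous are my own simplifications, not necessarily the original's) could look like this:

import Data.Bits (shiftL, testBit, (.|.))
import Data.Word (Word8)

-- One input per rising clock edge: (load, parallelData, serialIn).
-- When load is set, the parallel data is latched into the register;
-- otherwise the register shifts left and serialIn enters at bit 0.
-- The output stream is the register's MSB (Q7) after each edge.
shift74165 :: Word8 -> [(Bool, Word8, Bool)] -> [Bool]
shift74165 _   []                       = []
shift74165 reg ((load, par, si) : rest) = testBit reg' 7 : shift74165 reg' rest
  where
    reg' | load      = par
         | otherwise = (reg `shiftL` 1) .|. (if si then 1 else 0)

-- Load 0xB2 (1011 0010) on the first edge, then shift it out MSB first:
-- take 8 (shift74165 0 ((True, 0xB2, False) : repeat (False, 0, False)))
-- == [True,False,True,True,False,False,True,False]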
Much appreciated. I think I get the gist of it, even though it looks very complex compared to the Verilog counterpart. To be fair, my lack of Haskell knowledge doesn't help.
But it's really helpful to get a feel for the different approach.
I mean, Haskell is definitely an acquired taste if you haven't used other languages in the same style before. Also note that writing this in actual Clash would be a one-liner, because stuff like registers and shifts is available as part of the standard library.
I've tried to get into Haskell, and while I don't have a huge problem with the overall concepts most of the time (though they can feel a bit alien), my brain just can't seem to handle the syntax.
I've found myself thinking differently about code and using a lot more functional-ish concepts when writing my "normal" code though, so I do like the exposure.
So your point is you can have temporary variables in pure functions. That's fine.
However how do you implement registers? Do you have to pass around the "register file"? And if so, how does that work?
Like, what would a parallel-to-serial shift register look like? I.e. an asynchronous latch updates the internal shift register and an independent clock shifts out the values.
Why not something declarative, like Prolog (bonus: sounds like it could be the advanced version of Verilog) or HCL (Hashicorp Configuration Language, as used for Terraform)?
I've been thinking of writing a tool to use the latter for EDA, so you could build libraries of composable reusable blocks, take application note type examples directly rather than re-draw, etc. and (what motivated me initially) store it all in git. FPGAs seem at least as good a fit.
This has been tried several times, and it used to be a fairly common academic EDA research subject (as have other languages like Haskell and SML). I first saw Prolog used for HW design in 1995 or so. A somewhat newer example is this:
https://www.researchgate.net/publication/220760154_A_Prolog-...
One important area is digital systems testing. There, something like Prolog seems to me a better fit than it is for actual HW description. For test cases I want to efficiently describe the behaviour, especially inputs, expected outputs, and changes to state.
For the design, I'm not only interested in whether I get the correct behaviour (functionally correct), but also in _how_ I get that behaviour: how many resources are needed, what the routing looks like, how many metal layers it takes to route, how much power it will consume, how fast I can clock the design.