

Why General-Purpose Programming Languages Suck For Hardware Design - jcr
https://blog.synflow.com/why-general-purpose-programming-languages-suck-for-hardware-design/

======
oso2k
The blog post is flat out wrong and makes some huge, errant assumptions.

 _To create hardware that implements the same behavior as this program, you
would need:_

 _a dynamic memory allocator_

Wrong.

 _a virtual memory system (or dozens of gigabytes of RAM, just in case)_

Why would this problem need virtual memory?

 _and a very complex Finite State Machine to implement the behavior of all the
methods that are called directly and indirectly (so this includes methods from
"list", "reverse_iterator", "allocator", etc.)._

Again, wrong.

If the problem is to do

 _We'll make the assumption that there exists a system that can transform any
simple program written in a general-purpose programming language to an
efficient hardware implementation. This means that if I write a simple
program, for example one that creates a list of random numbers and outputs
them in reverse order_

That is not a very hard problem, and one could very well write a compiler that
does it efficiently on an FPGA or otherwise. Heck, even designing such a
circuit by hand would be doable for many EE BS grads.

In general, I believe the author attacked the proof from the wrong angle. The
things he believes are necessary (malloc, virtual memory, FSMs) are inventions
to circumvent hardware limitations, not consequences of language
expressiveness or any other issue with general-purpose languages. Humans
invented programming languages not because the hardware is impossible to
create, but because it is impossible for most wetware (humans) to create that
hardware (and/or software).

~~~
kd0amg
More broadly, it's important to distinguish what things are aspects of the
actual program semantics and what things are just common implementation
strategies. Linking in lots of STL implementation code is not strictly
necessary to get the behavior of a structure where one can push_back and walk
over with an iterator. A similar argument to the article's would say that a
compiler for a language with first-class functions cannot target a system with
memory protection.

That said, most general-purpose languages are designed with the expectation
that they'll be implemented on a conventional target machine. Compiling
something like C++ to efficient hardware is probably far into "sufficiently
smart compiler" territory. So I'm inclined to believe the conclusion even if
the proof is faulty.

~~~
oso2k
I'd almost buy your conclusion if C to HDL/Verilog compilers didn't exist [1].

[1]
[http://en.wikipedia.org/wiki/C_to_HDL](http://en.wikipedia.org/wiki/C_to_HDL)

------
Guvante
> We'll make the assumption that there exists a system that can transform any
> simple program

But then

> Before you answer, note that including a class from the STL means that
> behind the scenes your program actually includes a huge bunch of code.

Either your sample hardware translation can handle the STL natively or you do
not have a simple program.

> a dynamic memory allocator, a virtual memory system (or dozens of gigabytes
> of RAM, just in case), and a very complex Finite State Machine to implement
> the behavior of all the methods that are called directly and indirectly (so
> this includes methods from "list", "reverse_iterator", "allocator", etc.)

That is assuming you are allowing the full-powered standard STL. You could
provide a simplified one that avoided a lot of those problems (for instance,
having a maximum size of 256 and making some different underlying guarantees
about timing).
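A sketch of what such a simplified, hardware-friendly container might look like (the 256-element cap is the figure from the comment above; the interface is otherwise an assumption):

```cpp
#include <array>
#include <cstddef>

// A fixed-capacity stand-in for std::list: no heap allocation, a hard size
// limit, and constant-time push_back -- the kind of guarantees a simplified,
// synthesizable "STL" could make. Storage is a plain array, which maps
// directly to registers or block RAM.
template <typename T, std::size_t Capacity = 256>
class static_list {
    std::array<T, Capacity> data_{};
    std::size_t size_ = 0;

public:
    // Returns false instead of reallocating when the capacity is reached.
    bool push_back(const T& value) {
        if (size_ == Capacity) return false;
        data_[size_++] = value;
        return true;
    }

    std::size_t size() const { return size_; }

    // Reverse traversal without a reverse_iterator class: plain indexing.
    const T& from_back(std::size_t i) const { return data_[size_ - 1 - i]; }
};
```

The design choice is the point: by rejecting insertions past a fixed capacity, every operation has a statically known cost and memory footprint, which is what a hardware implementation needs.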

It seems the only point made is that no program could possibly take an
arbitrary program and create an efficient hardware implementation of it. And I
don't think anyone thinks that is possible.

~~~
MootWoop
> You could provide a simplified [STL]

Yes, and what if they use Boost? What I was trying to show here was that even
a small amount of C++ code can actually be very complex, and that makes it
impossible to transform to efficient hardware.

> And I don't think anyone thinks that is possible.

Well, you'd be surprised! Some fans of High-Level Synthesis seem to think that
it is just that powerful...

------
lumpypua
This blog post sets up a huge strawman. If you're trying to translate an
arbitrary high-level program into hardware, that's gonna be a pain in the ass.
OTOH, there are projects that allow you to use the composition tools of high-
level languages and still emit RTL code.

Haskell has:

\- CλaSH: [http://clash.ewi.utwente.nl](http://clash.ewi.utwente.nl)

\- Lava: [http://raintown.org/lava/](http://raintown.org/lava/)

\- Chortl:
[http://wiki.cs.pdx.edu/forge/chortl.html](http://wiki.cs.pdx.edu/forge/chortl.html)

~~~
jcr
And in the Haskell-Inspired department, we also have Idris:

[http://www.idris-lang.org/](http://www.idris-lang.org/)

I read something about getting Idris to emit RTL, but unfortunately, I can't
find a link to it.

Of course, synflow.com wants to promote its own "C~" language (I haven't
looked at C~ yet, so I've got no opinion on it).

Calling the blog post a "strawman" is a bit too harsh. Instead, I'd call it
"simplistic" or a "simple example" of the well-known issues with the more
commonly used general-purpose languages. As much as we love Haskell, its use
is far less prevalent than C or C++. The blog post would have been much better
if it went into more detail on the more difficult issues of parallelism.

~~~
MootWoop
Well, I did not want to promote C~ that much in that article :-) (there is not
even a reference to it in the post). And yes, the post is a bit simplistic,
but I wrote it in response to hardware designers who believed that any
sequential code (mostly C/C++) could be transformed to hardware given the
proper tools.

I guess I could have been more precise, because indeed Haskell would not have
the limitations of sequential languages (such as parallelism being limited by
explicit memory references). Maybe I'll talk about parallelism in another
article!

~~~
jcr
Thanks for chiming in. It's always great to see the original author show up in
HN threads.

As you've already noticed, HN can be a fairly rough and critical crowd, often
too much so. Though you aimed for a simpler example, by doing so you made the
example more accessible to software programmers without a hardware background,
and to hardware engineers without a software background. I enjoyed your blog
post, or I would not have posted it.

Also, with your work on C~, I suspect you are in a great position to write
something excellent about the differences in how parallelism is approached
(hardware versus "common" software). The less frequently used (but still
popular) languages like Haskell, Erlang, Idris, etc. are really the exception
that proves the rule.

I'd also love to see more on C~. As for you not promoting C~, my question for
you is, "Why not?" After all, as a founder you do have a business to build,
and I doubt anyone here would hold that against you. ;)

BTW, since you work between hardware and software, you might enjoy these
recent posts:

[https://news.ycombinator.com/item?id=7902817](https://news.ycombinator.com/item?id=7902817)

[https://news.ycombinator.com/item?id=7902575](https://news.ycombinator.com/item?id=7902575)

[https://news.ycombinator.com/item?id=7899220](https://news.ycombinator.com/item?id=7899220)

~~~
MootWoop
You're welcome :-) Indeed, I was a bit surprised by some hostile comments, yet
they do make good points. I never thought that the post would end up on Hacker
News; next time I'll take that into account ^^

Thanks for the links; I actually commented on the wired.com article yesterday.
Regarding the promotion of C~: yes, we have started doing that, especially
since we made it open source and have started to target everybody who might be
interested in designing hardware (on FPGA or ASIC).

------
etep
In a general-purpose programming language, every line of code is implicitly
sequenced until told otherwise (e.g. "for each in parallel, do...").

In a hardware description language, every line of code executes in parallel
unless explicitly sequenced.

QED
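One way to see the contrast is the classic two-assignment example. This C++ sketch emulates both semantics; the "hardware" version reads every right-hand side before storing any result, roughly like Verilog's non-blocking assignment (the function names are made up for illustration):

```cpp
#include <utility>

// Sequential (software) semantics: each line sees the effect of the
// previous one, so the second assignment re-reads the already-updated a.
std::pair<int, int> software_style(int a, int b) {
    a = b;  // a becomes b
    b = a;  // b re-reads the *new* a; no swap happens
    return {a, b};
}

// HDL-style semantics: all right-hand sides are evaluated "at once"
// against the old values, then the results are stored together.
std::pair<int, int> hardware_style(int a, int b) {
    int next_a = b;  // both RHS use the old values of a and b
    int next_b = a;
    return {next_a, next_b};  // the same two lines now swap a and b
}
```

The same two lines of source text mean different things under the two evaluation orders, which is exactly the mismatch the parent comment points at.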

I think it would be interesting, or at least instructive, to try to teach the
HDL way of thinking to an untainted student.

~~~
Tyr42
Note that that's not quite true for Haskell.

In Haskell, each line is not sequenced implicitly, though there are useful
patterns (monads) which do specify the evaluation order. IO, State, and Writer
are all sequenced, [] is depth-first search, and Par is parallel. We've got
patterns for them all.

~~~
etep
A good point, and I would agree that Haskell is a good candidate for an HDL.
The following project nicely captures what I think of as a good direction
(except for its dependence on the JVM, which is cross-platform torture):
[https://chisel.eecs.berkeley.edu](https://chisel.eecs.berkeley.edu)

------
fn
Well yes, that's why VHDL, Verilog, and the like exist.

In those languages, when you "include" a module that is "unsynthesizable", for
example the stdio library, the entire program is rendered unsynthesizable and
cannot be compiled into logic gates -- but it can still be run for simulation
purposes.

