
The Boundary Between Hardware And Software - Tsiolkovsky
https://www.gnu.org/philosophy/hardware-software-boundary.html
======
rthomas6
Hi, I write HDL for FPGAs in my day job. Programming an FPGA in C seems like a
fundamentally bad idea to me.

C is a procedural language. Most languages are procedural to some degree, but
C is entirely sequential. Digital hardware, unlike a C program, is entirely
concurrent. When you write HDL for hardware, you're not really writing a
program, you're _describing_ the hardware layout itself. Really it's closer to
making a logic gate or circuit diagram. HDL doesn't "run" like a software
program. It just exists on the FPGA. That's the design, and it doesn't change
once it's programmed on the chip. As such, the only "procedures" in hardware
come from causality: something that happened earlier in the same hardware,
either in a different part of the circuit, or in the same part of a clocked
design.

So what would be a good tool to describe hardware? I think (preferably
strict, immutable-state) functional languages would be much closer, because they're
much less procedural. You can more directly describe complex logic that takes
some input, and gives some output, in a manner such that sequence of events
does not matter. This would be much more powerful when mapped to an FPGA,
because everything that didn't rely on a previous result could happen at the
exact same time. And THAT much better utilizes the power of hardware.
Implementing a long chain of sequential events on hardware is... not the
smartest way to design hardware.
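To sketch the idea in Python (illustrative only, not a real HDL): describing logic as pure functions means independent outputs have no ordering between them, only data dependencies on shared inputs. On an FPGA both functions below would exist as parallel logic.

```python
# Illustrative sketch (not a real HDL): a 1-bit full adder described
# as pure functions. Neither output depends on the other being
# "computed first" -- in hardware both would evaluate simultaneously.

def sum_bit(a: int, b: int, cin: int) -> int:
    return a ^ b ^ cin

def carry_out(a: int, b: int, cin: int) -> int:
    return (a & b) | (cin & (a ^ b))

# The "design" is just the pair of definitions; there is no inherent
# sequence between them, only a data dependency on the shared inputs.
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    return sum_bit(a, b, cin), carry_out(a, b, cin)
```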

~~~
theaeolist
Why 'strict'?

~~~
Guvante
Because hardware is fundamentally strict: when you read a value, you read it.

~~~
agumonkey
Is it? Aren't there differential circuits with history that won't activate
the subsequent circuitry in case the read value didn't change?

~~~
Guvante
The order of operations is strict: when you specify you want something, you
grab it. In contrast, Haskell's evaluation order does not necessarily match
the definition order.
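The strict/lazy contrast can be sketched in plain Python, with a thunk standing in for Haskell-style laziness (illustrative only): under strictness a value is computed at the point you ask for it, whereas a deferred computation runs whenever it is forced, so evaluation order need not match definition order.

```python
# Illustrative sketch: strict vs. lazy evaluation order.

log = []

def expensive(tag: str) -> int:
    log.append(tag)          # record when evaluation actually happens
    return 42

# Strict: evaluated immediately, in definition order.
x = expensive("strict")

# Lazy (modeled as a thunk): nothing runs until the thunk is forced.
y = lambda: expensive("lazy")

assert log == ["strict"]     # the lazy value has not been computed yet
_ = y()                      # forcing the thunk triggers evaluation
assert log == ["strict", "lazy"]
```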

------
cantankerous
"As for the HDL code itself, it can act as software (when it is run on an
emulator or loaded into an FPGA) or as a hardware design (when it is realized
in immutable silicon or a circuit board)."

This statement comes with some pretty heavy caveats. You can't just compile
any arbitrary piece of software to hardware. At least not directly...not
without an implied runtime package to support your program on hardware.
Software developers tend to take this stuff for granted, especially at the
high level. If you're writing C, where is your stack living? Do you plan on
using the heap? Those need to go down to hardware somewhere.

This is the big trouble with FPGAs + software developers. Folks at the
software level can (and probably tend to) fully exploit the Turing
completeness (or at least any power above regular-language recognition) of
their languages, inadvertently or not. When you want to support a computation that
requires more power than finite automata can provide, you're going to need to
bundle in things like memory management functionality, and maybe even a proto-
processor. The rub is that you already have these things in literally any
computer, and they're likely going to be faster than what you can do on an
FPGA. For now anyway.
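The distinction can be sketched in Python (illustrative only, not any real HLS tool's output): a loop whose bound is known at compile time can be fully unrolled into a fixed dataflow graph, while input-dependent recursion needs a runtime stack, i.e. memory machinery on the chip.

```python
# Illustrative sketch: why some software maps to fixed logic and some
# doesn't.

def popcount8(x: int) -> int:
    # Bound known statically: 8 iterations, unrollable into 8 adders.
    total = 0
    for i in range(8):
        total += (x >> i) & 1
    return total

def depth(tree) -> int:
    # Recursion depth depends on the input: no fixed-size circuit can
    # implement this without a stack (or an imposed bound).
    if tree is None:
        return 0
    left, right = tree
    return 1 + max(depth(left), depth(right))
```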

FPGAs are a tough nut to crack from the bottom up and the top down. If they're
going to turn into a massively disruptive technology, we're going to need
better tools for them and maybe some different thinking about how we program
for them.

~~~
theaeolist
Indeed, but you can compile a lot into h/w. For example, you can compile
recursive and higher-order functions and ground-type mutable state. You can
also have a type system that tells you at compile time whether your program
can be compiled or not.

Here are some entry points to the Verity/GOS project:

* Paper describing the theory: Dan R. Ghica, Alex I. Smith, Satnam Singh: Geometry of Synthesis IV: Compiling affine recursion into static hardware. ICFP 2011: 221-233

* Compiler from Verity to HDL: www.veritygos.org

------
twoy
Although I haven't developed hardware, I feel that hardware development
suffers from rigidity. Has there been noticeable progress in development, not
manufacturing, in the last 10 years? It seems that hardware developers often
complain about VHDL and Verilog while continuing to use them, because they
are the only options vendors actually support. A netlist format such as EDIF
seems like assembly language for hardware, but I have also heard that vendors
need to produce their own custom netlists from VHDL or Verilog source. I
suspect that processors could have evolved faster than the pace of Moore's
law if the tools had accumulated developers' creativity.

------
jpt4
The hardware level cross-over between logic and physics is of exceptional
interest to me, and one of the drivers of my personal investigation of
computing more generally.

It is my current understanding of FPGAs that they allow for reconfigurable
_logical circuits_ , but still implement those functions using fixed _spatial
circuits_ , primarily by dispatching signals down routing traces as specified
by LUTs. These spatial circuits, in particular the routing paths, as the
physical underpinnings of an FPGA are non-reconfigurable, and form the
proprietary core of the product. I have three questions, if anyone could
provide insight:

(1) Is the above notion correct, or does the act of burning in a bitstream to
an FPGA actually physically reroute traces within and between blocks of logic
gates?

(2) The latter option in (1) would presumably still require some sort of
supporting hardware, itself non-reconfigurable and subject to proprietary
control, correct?

(3) Are there reconfigurable devices that offer complete, self-hosted
reconfigurability? By which I mean that an unconfigured device must first be
bootstrapped (via an external tool) to contain the bitstream burning auxiliary
circuitry, but this framework remains no more fixed than the rest of the
device. Furthermore, the bootstrapped circuits can update themselves, by
creating second-generation support circuitry elsewhere on the device, which
access and alters the first generation configuration.

Thank you in advance.

Edit: (4) Given the answers to (1), are there any devices that have
reconfigurable routing traces?

~~~
mng2
Re (3), FPGA Partial Reconfiguration has been a standard feature on high-end
parts for a long time. However I've heard (never used it myself) that the
present implementations are of limited utility. Most applications that can
afford a high-end FPGA can afford an auxiliary processor for management
duties, so self-reconfiguration remains something of a curiosity.

~~~
jpt4
Thank you. To your knowledge, does any device have the property described in
"Edit: (4)..."?

~~~
mng2
I'm not sure what you mean by 'physically reroute traces'. If you mean,
physically move a metal trace across the chip, that's not really possible at
this point.

One-time programmable devices (the GAL and PAL parts that are essentially
obsolete now; most FPGAs are SRAM-based these days) require physical fuses to
be blown in order to program functionality. I suppose that's a physical
change, but one that's irreversible.

------
JoshTriplett
The most relevant part of this mail seems like the announcement of the
reverse-engineering success that allows programming an FPGA with a FOSS
toolchain. That that toolchain currently uses C seems like a minor
implementation detail; if the bitstream format is understood well enough to
generate it, it should be possible to use an HDL as well.

~~~
jesuslop
But the next generation of FPGAs will obsolete the reverse-engineered insights.

------
NTDF9
"software is the operational part of a device that can be copied and changed
in a computer; hardware is the operational part that can't be. This is the
right way to make the distinction because it relates to the practical
consequences."

I disagree.

Operation of computers cannot be classified simply into hardware and software.
There are three important parts:

- Hardware
- Software
- The interface (basically the instruction set)

Another way to view ISAs is like a "Contract" between hardware and software.
Hardware promises that certain instructions will provide certain behavior.
Software acknowledges this and uses these instructions to build software on
top of it.
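The contract view can be sketched in Python (illustrative only; all names here are made up): software is written against the abstract interface, and any implementation that honors it, whether hardwired logic or microcode, can run the same program.

```python
# Illustrative sketch: an ISA as a contract between hardware and software.
from abc import ABC, abstractmethod

class TinyISA(ABC):
    """The contract: two instructions with promised behavior."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...
    @abstractmethod
    def nand(self, a: int, b: int) -> int: ...

class DirectImpl(TinyISA):
    # Stands in for hardwired logic.
    def add(self, a, b): return (a + b) & 0xFF
    def nand(self, a, b): return ~(a & b) & 0xFF

class MicrocodedImpl(TinyISA):
    # Stands in for microcode: add built from lower-level bitwise steps.
    def nand(self, a, b): return ~(a & b) & 0xFF
    def add(self, a, b):
        while b:                       # ripple addition via bitwise ops
            carry = (a & b) << 1
            a = (a ^ b) & 0xFF
            b = carry & 0xFF
        return a

def program(cpu: TinyISA) -> int:
    # "Software": depends only on the contract, not the implementation.
    return cpu.nand(cpu.add(3, 5), 0xFF)
```

Both implementations honor the contract, so the program's result is identical on either one.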

~~~
ericfontaine
As mentioned earlier, FPGAs do not fit this 3-part distinction.

In addition, CPU microcode doesn't fit this distinction, as the microcode is
software (not hardware) that is hidden behind the ISA's interface.

Also, I can think of computers that only have hardware and data input (without
any programs or instruction set). Basically number crunchers that run a fixed
program (e.g. DSPs or non-general GPUs that perform a specific function),
with only the data input that can change.

You bring up a good point by acknowledging "The Interface". But I would
reclassify into a different set of parts, something along the lines of
"Interface" and "Implementation". I/O can be programs and/or data that
interface with the implementation, which can be hardware and/or software, or
another whole computer. (This is a potentially recursive classification, so
it can include virtual machines.)

------
zokier
The FSF's stance on firmware is one of the areas where the situation gets somewhat
silly: [http://lwn.net/Articles/460654/](http://lwn.net/Articles/460654/)

~~~
ericfontaine
Clever, they insert an additional chip between the CPU and wifi whose sole
purpose is to load the wifi firmware, thus making it non-update-able
"effectively circuitry" to bypass the silly FSF rules.

Using the same logic, what if someone ran Netflix's proprietary DRM
decryption-specific code in a separate co-processor that interfaced with the
main CPU only via the W3C's Encrypted Media Extensions? Assuming this code is
never updated and runs isolated from the CPU, according to the FSF's silly
rules about updatability, they would be OK with this "effective circuitry"
(of course ignoring their concerns about DRM itself). But if that same code
were run inside Firefox's sandbox for EME plugins, it would be considered
dangerous non-free software.

------
sebastianconcpt
The best way I've found to think about this is summarised by Alan Kay:
"Hardware is crystallised software".

I think it's the best metaphor because you can think of the crystal as
immutable software.

FPGAs are like melting crystals.

They can eventually mutate under some circumstances until they crystallise
again (crystals/temperature vs. FPGA reprogramming).

~~~
gshrikant
I find it more natural to think of software as different internal states of
hardware or driving the state changes, at least.

For FPGAs, the "software" defines the hardware (in a sense), so there is a
very tight integration between the two - to the extent that there really
isn't any boundary at all.
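The "software as hardware state" view can be sketched in Python (illustrative only): the machine below is fixed, a tiny fetch-execute loop, and changing only its memory contents, the "software", changes what it does.

```python
# Illustrative sketch: fixed "hardware" whose behavior is selected
# entirely by its memory contents.

def run(memory: list[int], steps: int = 10) -> int:
    """Fixed machine: opcode 0 = add operand to accumulator, 1 = halt."""
    acc, pc = 0, 0
    for _ in range(steps):
        op, arg = memory[pc], memory[pc + 1]
        if op == 1:
            break
        acc += arg
        pc += 2
    return acc

prog_a = [0, 5, 0, 7, 1, 0]   # adds 5, then 7, then halts
prog_b = [0, 2, 1, 0]         # adds 2, then halts
```

Same machine, two behaviors: the only thing that changed between `prog_a` and `prog_b` is the state loaded into memory.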

------
ericfontaine
"Firmware that is installed during use is software; firmware that is delivered
inside the device and can't be changed is software by nature, but we can treat
it as if it were a circuit."

So any code, gate patterns, or microcode (preinstalled or not) that is
potentially update-able (regardless of difficulty) would be considered
"software". But any code, gate patterns, or microcode that is preinstalled
into some rom or fuse memory inside the chip or that is otherwise impossible
to be updated would be considered "hardware".

If an entire OS microkernel (like seL4) is burned into some ROM inside the
same chip as the CPU and is physically impossible to update, it would be
considered hardware by this gnu.org definition, but should be considered
software according to Wikipedia's definitions:

"[Software] is any set of machine-readable instructions that directs a
computer's processor to perform specific operations." "[Hardware] is the
collection of physical elements that constitutes a computer system."

Theoretically anything in software can be implemented as hardware and vice
versa. But should updateable-ness really be the differentiating factor? Now
let's say that seL4 was compiled into VHDL to put in an FPGA. (I am using
seL4 as an example because it is small and specified in a functional
language, which would be easier to implement in hardware than a macrokernel
written in a sequential language like C, and because it performs a
significant function of a computer, even though it may be impracticable to
implement in a hardware description language.) Is that hardware or software
now? I would say hardware, because it is now a collection of physical
elements connected together. But gnu.org's definition only seems to care
about updateable-ness.

What if instead the seL4 code was on a ROM that couldn't be updated
internally, but could instead be physically removed from a socket or
unsoldered and replaced (even if not intended to be)? I'm not even sure
whether this gnu.org definition would consider it hardware or software, but I
would guess that, using the strict updateable-ness criterion alone, by having
some manner of being replaced it would be considered software.

Sometimes organizations choose definitions to provide realizable goals. The
FSF uses this updateable-ness distinction, and can then, for example, say
that the libreboot X60 runs entirely free "software". But of course my
libreboot X60 has a bunch of processors running un-updateable proprietary
code, such as the wifi, disk, and graphics chips, even though they interface
with the CPU using open-source drivers. Also, the FSF seems to tolerate the
preconfigured microcode already on the CPU, but won't tolerate microcode
updates loaded at boot. But by using this non-updateable-ness criterion, the
FSF has a much more manageable and achievable set of conditions for meeting
the goal of free computing, rather than having to "free all the things".

Maybe the FSF is attempting to find definitions relevant to libre-ness. But I
don't think making up their own new definitions of hardware and software is
necessary for achieving the goal of free computing, nor terribly useful. I
think they should just stick to the 4 freedoms (to run the program, to study
how the program works, to redistribute copies, to improve the program) and
might be better off focusing on the word "program" instead of having to make
up their own new definitions for hardware and software. The word "program"
transcends the distinction between hardware and software.

~~~
fchmmr
There is no firmware at all on the ath9k wifi chipsets. The firmware in the
HDD/SSD is the same issue on all computers, not just libreboot. The graphics
chip doesn't contain any firmware, instead the "video bios" is included in the
SPI chip alongside coreboot or libreboot. In the case of libreboot, all
current targets have free Video BIOS implementations, referred to in coreboot
as "native graphics initialization".

~~~
ericfontaine
"There is no firmware at all on the ath9k wifi chipsets."

No, there is firmware inside the wifi. Just looking at a specific ath9k
wifi chip: [https://www.qca.qualcomm.com/wp-
content/uploads/2013/11/AR94...](https://www.qca.qualcomm.com/wp-
content/uploads/2013/11/AR9462.pdf) I see it has a "32-bit Tensilica Xtensa
CPU" and this CPU is running some proprietary firmware on the embedded Code
ROM. Although this firmware cannot be updated, is completely isolated from the
main x86 CPU in my libreboot X60s, can't directly access any data outside of
the physical miniPCI board, and only interfaces to the rest of computer via
open-source x86 driver code on the main CPU, this embedded ROM code
controlling the embedded RISC processor is however still proprietary firmware.
This is the same deal with the microcontrollers inside the HDD/SSD and
graphics chipset, which all have proprietary embedded firmware that is
interfaced with by the open-source x86 drivers. Firmware != driver.

~~~
fchmmr
The PDF that you linked to is talking about AR9462. Libreboot machines
typically use chips with the AR9285 chipset.

~~~
ericfontaine
Ok, I simply linked to that as an example of a device that can interface with
the free ath9k driver. Looking at [https://www.qca.qualcomm.com/wp-
content/uploads/2013/11/AR92...](https://www.qca.qualcomm.com/wp-
content/uploads/2013/11/AR9285.pdf) I can't really see any closer detail
inside the black boxes labeled "Baseband PHY and Wireless MAC" and "Host
Interface", but I can make a very educated guess that they are running some
very simple RISC and/or DSP core with code from a ROM, responsible for
performing all the non-analog functions necessary for wireless media access
control (e.g. error recovery, finding free channels, Rx/Tx buffering) and for
interfacing with the PCIe bus, outside of the functions that handle higher
levels of the networking stack, which are more practically implemented in the
ath9k driver code running on the x86 CPU.

