Thanks for pointing this out.
In the olden days it used to be fairly common to use EEPROMs (or just PROMs) with a few latches as a state machine: the latched state plus the inputs drive the ROM's address lines, and the ROM's data lines feed the next state back into the latches on every clock. This way it was possible to accomplish many of the things you'd need a CPU for, but without needing one at all.
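To make that concrete, here's a minimal sketch of the ROM-plus-latch technique in Python. The "101" sequence detector is my own illustrative example, not something from the comment above: 2 state bits from the latch plus 1 input bit form the ROM address, and each ROM word packs the next-state bits (fed back to the latch) together with one output bit.

```python
# ROM-plus-latch state machine: a "101" serial-sequence detector.
# Address = (state << 1) | input_bit; data = (next_state << 1) | output_bit.

def burn_rom():
    rom = [0] * 8  # 3 address bits -> 8 words
    def word(next_state, out):
        return (next_state << 1) | out
    # state 0: nothing useful seen yet
    rom[(0 << 1) | 0] = word(0, 0)
    rom[(0 << 1) | 1] = word(1, 0)  # saw "1"
    # state 1: saw "1"
    rom[(1 << 1) | 0] = word(2, 0)  # saw "10"
    rom[(1 << 1) | 1] = word(1, 0)  # still just "1"
    # state 2: saw "10"
    rom[(2 << 1) | 0] = word(0, 0)  # "100": start over
    rom[(2 << 1) | 1] = word(1, 1)  # "101": pulse output, reuse trailing "1"
    return rom

def run(rom, bits):
    state, outputs = 0, []
    for bit in bits:                    # each iteration = one clock edge
        data = rom[(state << 1) | bit]  # ROM lookup
        outputs.append(data & 1)        # output bit
        state = data >> 1               # latches capture the next state
    return outputs

print(run(burn_rom(), [1, 0, 1, 0, 1]))  # -> [0, 0, 1, 0, 1]
```

The whole "program" is just the ROM contents; swapping the chip changes the machine's behavior without touching the wiring, which is exactly what made this trick so handy.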
From there it's not a big leap to add an ALU and some control flow, hehe.
The use of microcode in CPUs began like this too. Originally, people replaced random logic in a CPU's control unit with a mask ROM or a PLA. The goal was simply to avoid the trouble of building decoders from gates one by one. Although it's a form of programmable logic, few would describe these simple ROM lookup tables as "programs". Later, people pushed the idea further and built a small state machine to control the ALUs, with the microcode in ROM in turn controlling that state machine. This marked the birth of the "micro-sequencer". Beyond their internal use in integrated circuits, micro-sequencers also found use as universal building blocks for custom discrete CPUs in the 1980s; examples include the AMD Am2900, Am29100 and Am29300 series chips. Building a CPU became as easy as connecting these logic blocks together and writing some microcode for the micro-sequencer. Push the development of microcode far enough and you eventually reach its final form: a CPU within a CPU, at which point you may argue the CPU is not really hardware anymore, but is driven by software.
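The micro-sequencer idea can be sketched in a few lines. This is a hedged toy model, not any real Am2900-series format: each microcode word holds a control signal plus a next-address field, so even the sequencing of the microengine lives in the ROM. The field layout and the four-step "ADD" micro-routine are my own invention for illustration.

```python
# Toy micro-sequencer: each microcode word = (control_signal, next_uaddr).
# The sequencer fetches the word at uPC, asserts the control signal,
# then loads next_uaddr into uPC -- a state machine whose transitions
# are stored in ROM, just like the comment above describes.

UCODE = [
    ("LATCH_A", 1),    # uaddr 0: load operand into the A latch
    ("LATCH_B", 2),    # uaddr 1: load operand into the B latch
    ("ALU_ADD", 3),    # uaddr 2: have the ALU add the latches
    ("WRITE_OUT", 0),  # uaddr 3: write result, loop for the next macro-op
]

def execute_add(x, y):
    a = b = alu = out = 0
    upc = 0
    for _ in range(len(UCODE)):      # one pass through the micro-routine
        signal, next_upc = UCODE[upc]
        if signal == "LATCH_A":
            a = x
        elif signal == "LATCH_B":
            b = y
        elif signal == "ALU_ADD":
            alu = a + b
        elif signal == "WRITE_OUT":
            out = alu
        upc = next_upc               # next-address field drives sequencing
    return out

print(execute_add(2, 3))  # -> 5
```

A single macro-instruction like ADD expands into this little ROM-driven routine, which is exactly why the boundary between "hardware behavior" and "a program" gets blurry.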
At which point does the use of microcode make a CPU software rather than hardware? There's really no clear boundary, so microcode has always sat on the blurry line between hardware and software. This was also a heavily contested issue in several lawsuits involving cloned CPUs, since hardware is not covered by copyright, but software is.