
An emulator for a single-instruction (NOR) CPU - wlrm
The NOR Machine

The idea is to use just one function to compute everything. It suffices to do this for bits: bytes consist of bits, and if we can compute bits, we can also compute bytes, and everything made up of bytes. That is, everything.

There are sixteen boolean functions of two arguments. Two of them are special: NOR and NAND. The other fourteen functions can be computed using only either NOR or NAND. We’ll use NOR.

NOR’s truth table is as follows:

    ===== ========
    Input  Output
    ===== ========
     A B  A NOR B
    ----- --------
     0 0     1
     0 1     0
     1 0     0
     1 1     0
    ===== ========
https://github.com/yuanxinyu/NOR-CPU
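As a quick illustration of the universality claim (a Python sketch, not part of the emulator): with NOR as the only primitive, NOT, OR, and AND fall out in a line each.

```python
# NOR as the single primitive; everything else by composition.
def NOR(a, b):
    return int(not (a or b))

def NOT(a):            # A NOR A
    return NOR(a, a)

def OR(a, b):          # NOT (A NOR B)
    return NOT(NOR(a, b))

def AND(a, b):         # (NOT A) NOR (NOT B), by De Morgan
    return NOR(NOT(a), NOT(b))
```

XOR, implication, and the remaining two-input functions follow by composing these.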
======
adamnemecek
This might be of interest:
[https://en.m.wikipedia.org/wiki/One_instruction_set_computer](https://en.m.wikipedia.org/wiki/One_instruction_set_computer)

~~~
TazeTSchnitzel
Also:
[https://github.com/xoreaxeaxeax/movfuscator](https://github.com/xoreaxeaxeax/movfuscator)
(which makes x86 single-instruction)

~~~
edwintorok
How about computing with 0 instructions? (aka Intel MMU's fault handling is
Turing complete):
[https://github.com/jbangert/trapcc](https://github.com/jbangert/trapcc)

------
wallacoloo
One-instruction computers intrigue me because of the possibility of making the
processor out of a ridiculously low number of transistors. To that effect,
I've always wanted to implement one that _doesn't_ require a hardware adder
for the instruction pointer. It seems like it should be possible. For example,
each clock cycle the CPU would take the NAND of the bit pointed to by the IP
and the bit adjacent to it (via flipping the LSB of the IP) and use the result
to determine a new value of the IP using very simple logic (e.g. IP = IP<<1 |
NAND(mem[IP], mem[IP ^ 1])). But surprisingly, I've never seen this done, and
the last time I approached it I didn't have the knowledge needed to choose an
appropriate scheme and prove its usability.
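The scheme described above can be concretized as a sketch; the 8-bit IP width and flat bit-array memory are illustrative assumptions, and whether the resulting IP dynamics can encode useful control flow is exactly the open question the comment raises.

```python
# Each cycle: NAND the addressed bit with its neighbor (address with the
# LSB flipped), then shift the result into the IP, as proposed above.
def step(mem, ip, ip_bits=8):
    mask = (1 << ip_bits) - 1
    bit = 1 - (mem[ip] & mem[ip ^ 1])    # NAND of the two addressed bits
    return ((ip << 1) | bit) & mask      # IP = IP<<1 | NAND(...)

# On an alternating 0,1 tape the NAND is always 1, so the IP walks
# through the all-ones patterns 1, 3, 7, 15, ...
mem = [0, 1] * 128
ip = 0
trace = []
for _ in range(8):
    ip = step(mem, ip)
    trace.append(ip)
```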

~~~
aaeegg_20160625
A few processors, such as the TI TMS1000 and the Sharp SM4, used a maximal-
length LFSR for the program counter. This may be more practical than it
sounds, if your memory subsystem doesn't penalize out-of-order accesses. You
could write code normally, pretending that instructions are at contiguous
addresses, and then scramble the object code into LFSR sequence at link time.
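The link-time scramble could look roughly like this (a Python sketch; the 4-bit taps x^4 + x^3 + 1 are just one maximal-length choice, not what the TMS1000 or SM4 actually used):

```python
# Illustrative 4-bit Fibonacci LFSR: maximal length, period 15.
def lfsr_sequence(start=1):
    seq, state = [], start
    for _ in range(15):                  # visits every nonzero state once
        seq.append(state)
        fb = (state ^ (state >> 1)) & 1  # XOR of the two tapped bits
        state = (state >> 1) | (fb << 3)
    return seq

def scramble(code, seq):
    # Link-time step: logical instruction n lands at the address the
    # program counter will visit n-th. Slot 0 is never reached by the LFSR.
    image = [None] * (len(seq) + 1)
    for n, insn in enumerate(code):
        image[seq[n]] = insn
    return image
```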

As another note, for low gate count processors, you may want to look into bit-
serial techniques, as used in the PDP-8/S.

~~~
nickpsecurity
"used a maximal-length LFSR for the program counter."

I found the Wikipedia entry on maximal-length LFSRs. How do those end up
useful in program counters?

"you may want to look into bit-serial techniques"

You talking about 1-bit ALUs like these?

[https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Comb/oneb...](https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Comb/onebitALU.html)

Making a comeback in another field:

[http://www.caesjournals.org/uploads/IJCAES-CSE-2012-132.pdf](http://www.caesjournals.org/uploads/IJCAES-CSE-2012-132.pdf)

~~~
aaeegg_20160625
You wouldn't normally use an LFSR as a program counter, because it's more
convenient to have the instructions execute at addresses 1, 2, 3, 4, etc.
rather than (say) 79, 30, 61, 122, etc. But an LFSR has fewer gates than an
adder, which is why it was used in those old 4-bit processors.

The last diagram in the page you linked shows three 1-bit ALUs chained
together to make one 3-bit ALU. As I understand it, the PDP-8/S used a
_single_ 1-bit ALU to sequentially calculate each bit in a 12-bit output word,
taking 12 cycles to emulate a 12-bit ALU. For the 3-bit example, imagine there
being a single ALU which acts first as the rightmost ALU in the diagram, and
then once you have the carryout bit it acts as the middle ALU, and finally it
acts as the leftmost ALU.
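The sequential reuse described here can be modeled in a few lines (a dataflow sketch, not a claim about the PDP-8/S's actual circuitry):

```python
# One 1-bit full adder reused for `width` cycles, LSB first, with the
# carry held between cycles.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def serial_add(x, y, width=12):
    result, carry = 0, 0
    for i in range(width):                       # one "clock cycle" per bit
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result                                # final carry out is dropped
```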

~~~
nickpsecurity
Appreciate the tip.

------
keenerd
> Two of them are special: NOR and NAND.

No, IMP does this as well. It has mostly been useful for memristor logic.

    
    
        A B   IMP
        0 0    1
        0 1    1
        1 0    0
        1 1    1
    

I have a suspicion there is a more general mathematical law that says any
boolean rule with a 1-of-4 split (either exactly one true or exactly one
false) can be used for general computation, if you can make a NOT gate out of
it. (Since AND and OR obviously are not workable.) So there should be six
possible universal logic gates.
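The suspicion can be probed by brute force: a closure search that starts from the two projections f(A,B)=A and f(A,B)=B and composes only the candidate gate, assuming no constant inputs are wired in.

```python
# Functions of (A, B) are encoded as 4-bit truth tables: bit i is the
# output on inputs a = i>>1, b = i&1. A gate is itself such a 4-bit table.
def generated(gate):
    funcs = {0b1100, 0b1010}              # projections A and B
    while True:
        new = set(funcs)
        for f in funcs:
            for g in funcs:
                h = 0
                for i in range(4):        # apply gate pointwise to f, g
                    fi = (f >> i) & 1
                    gi = (g >> i) & 1
                    h |= ((gate >> ((fi << 1) | gi)) & 1) << i
                new.add(h)
        if new == funcs:
            return funcs
        funcs = new

universal = [g for g in range(16) if len(generated(g)) == 16]
```

Run as-is, only 0b0001 (NOR) and 0b0111 (NAND) generate all sixteen functions on their own; the other 1-of-4 gates, IMP included, additionally need a constant input to produce their NOT.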

~~~
superobserver
IMP is really NOT(A AND NOT(B))

A → B |- ~(A & ~B)

1) A → B [by assumption] 1

2) A & ~B [by assumption] 2

3) A [conj. elim. 2] 2

4) B [imp by 1,2] 1,2

5) ~B [conj. elim 2] 2

6) B & ~B [conj 4,5] 1,2

7) ~(A & ~B) [reductio ad absurdum 2,6] 1

{Thus by assuming only A → B we prove that we can derive ~(A & ~B).}

From there you can also trivially prove that ~(A & ~B) |- A → B; thus, the
two are logically equivalent, and so given the primitives ~ and &,
implication is not quite as "special" for a Universal Turing machine. This is
merely to show, independent of any construction using only IMP, that one can
trivially reduce it to more primitive components, thus undermining any claim
to a "special" status in the context of universal computation.

~~~
kmill
Rather than deducing it, you can also look at the truth table, see it is 0
exactly where B is not true and A is true, and get not(A and not B). Then, we
can conclude there is a proof such as yours from the completeness theorem, I
believe.

~~~
superobserver
Yes, but it's more fun and edifying to prove it. :) It's so rare I get an
opportunity to whip out propositional logic and formal proof methods that I
hardly balk at the opportunity. ( _hears groans from non-nerds_ )

~~~
Breakthrough
If you use De Morgan's law on your result, you can simplify it further to
~A | B
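All three forms can be checked mechanically over the four input rows:

```python
# The chain IMP(A,B) = NOT(A AND NOT B) = (NOT A) OR B, row by row:
IMP_TABLE = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}  # table from upthread

for a in (0, 1):
    for b in (0, 1):
        assert int(not (a and not b)) == IMP_TABLE[(a, b)]
        assert int((not a) or b) == IMP_TABLE[(a, b)]      # De Morgan form
```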

------
userbinator
Not exactly the same thing, but the Apollo Guidance Computer is built entirely
out of 3-input NOR gates:

[http://klabs.org/history/ech/agc_schematics/](http://klabs.org/history/ech/agc_schematics/)

------
nickpsecurity
Neat CPU. That and adamnemecek's comment made me do a quick Google to see if
there are interesting results in OISC that you all might enjoy. Found an
improvement on SUBLEQ that's pretty impressive. Original first and newest
below:

[http://hasith.vidanamadura.net/projects/subleq/](http://hasith.vidanamadura.net/projects/subleq/)

[https://arxiv.org/pdf/1106.2593v1.pdf](https://arxiv.org/pdf/1106.2593v1.pdf)

"Our test results demonstrate that computational power of our Subleq OISC
multi-processor is comparable to that of CPU of a modern personal computer."

I didn't see that coming for OISC projects. Pleasant surprise. :)
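For anyone who hasn't seen it, the whole SUBLEQ machine fits in a few lines. This sketch uses one common convention among several; details like the halt condition vary between implementations.

```python
# Each instruction is three words (a, b, c): compute mem[b] -= mem[a] and
# jump to c if the result is <= 0, else fall through. A negative target halts.
def subleq(mem):
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Usage: B += A via the classic three-instruction idiom, with A at address 9,
# a scratch zero cell Z at 10, and B at 11:
prog = [9, 10, 3,    # Z -= A   (Z becomes -A; branch target is next insn)
        10, 11, 6,   # B -= Z   (i.e. B += A)
        10, 10, -1,  # Z -= Z   (clear Z, then halt)
        5, 0, 7]     # A = 5, Z = 0, B = 7
subleq(prog)
# prog[11] is now 12
```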

------
beautifulfreak
You might enjoy the Matrix Logic series of books by August Stern, which
develop the idea of logic operations as vector multiplication. The NOR
operator would be represented by a 2x2 matrix, [10|00], and an operation such
as 1 NOR 0 would be the matrix multiplication [01]x[10|00]x[10]^T. (Scalar
operand 1 is represented by [01], and 0 by [10], in both left and right hand
forms.) It's simple but deep, when fully extended. Treated as linear algebra,
logic is a lot more sensible.

There are surprises, too. For instance, a statement like
[01]x[01|11]x[01|11]x[10] makes perfect sense: 1 OR OR 0. So what is OR OR?
Is it useful? This wouldn't be obvious from the usual way of computing OR as
addition mod 2, and the messiness of carrying the extra digit in 1 OR 1.
Determining the truth value of statements by direct computation, without
reference to an external truth table, gives mathematical certainty to the
outcome.

Multiple statements can be compounded with tensors, giving operators that are
2^n in dimension. Complementarity plays a huge role in showing coherence of
the matrix system of logic. There's much more, but that's the general idea. I
was reminded of Matrix Logic, because Stern shows a computation done strictly
with NAND, plus NOT, to illustrate how many operations must be done, where
using different operators would have been much more efficient.
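The 2x2 encoding described above is easy to check directly; this sketch uses plain Python lists in place of a matrix library, following the stated conventions (0 as [10], 1 as [01], NOR as [10|00]).

```python
# Stern's encoding as described above.
ENC = {0: [1, 0], 1: [0, 1]}
NOR_M = [[1, 0],
         [0, 0]]

def nor(a, b):
    # <a| M |b>: row vector times operator times column vector
    row, col = ENC[a], ENC[b]
    return sum(row[i] * NOR_M[i][j] * col[j]
               for i in range(2) for j in range(2))
```

Evaluating all four input pairs reproduces the NOR truth table from the article.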

------
dorfuss
Reminds me of a book and Coursera course called "From NAND to Tetris" by Noam
Nisan and Shimon Schocken. You can learn how to build a computer from basic
logic gates; the website below provides a hardware simulator in Java and
pre-made components.

[http://www.nand2tetris.org/](http://www.nand2tetris.org/)

[https://mitpress.mit.edu/books/elements-computing-systems](https://mitpress.mit.edu/books/elements-computing-systems)

