Universal Logic Gates (2015) (allaboutcircuits.com)
75 points by joycian on June 16, 2019 | 36 comments



Of the 16 possible 2-input/1-output gates:

* 8 of them are linear, in that the output is A xor (B and X) xor (C and Y) for fixed constants A, B, C and inputs X, Y. Any combination of these can only ever generate linear combinations of their inputs, thus they cannot be universal.

* 6 of them are monotonic, in that X op Y <= X' op Y' if X <= X' and Y <= Y'. Any combination of these can only ever generate monotonic functions of their inputs, thus they cannot be universal.

* 4 gates (0, X, Y, 1) are both linear and monotonic.

* The 6 gates which are neither linear nor monotonic are universal (NAND and NOR on their own; the other four need access to a constant, as discussed further down the thread).
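
Here is a brute-force check of these counts, as a Python sketch of my own (not from the comment above); each gate's truth table is packed into a 4-bit integer, with bit 2X+Y holding the output for inputs (X, Y):

    def outputs(table):
        # bit (2*x + y) of `table` is the gate's output on inputs (x, y)
        return [(table >> (2 * x + y)) & 1 for x in (0, 1) for y in (0, 1)]

    def is_linear(table):
        # linear: output == a xor (b and x) xor (c and y) for constants a, b, c
        out = outputs(table)
        a, b, c = out[0], out[0] ^ out[2], out[0] ^ out[1]
        return out == [a ^ (b & x) ^ (c & y) for x in (0, 1) for y in (0, 1)]

    def is_monotonic(table):
        out = outputs(table)
        pts = [(x, y) for x in (0, 1) for y in (0, 1)]
        return all(out[2 * x + y] <= out[2 * xx + yy]
                   for (x, y) in pts for (xx, yy) in pts
                   if x <= xx and y <= yy)

    linear = [t for t in range(16) if is_linear(t)]
    mono = [t for t in range(16) if is_monotonic(t)]
    both = [t for t in linear if t in mono]
    neither = [t for t in range(16) if t not in linear and t not in mono]
    print(len(linear), len(mono), len(both), len(neither))  # 8 6 4 6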


Slightly off-topic but it's clear from the table that you can simulate any 2-in, 1-out gate with four bits of memory and two address lines. Which is exactly how "gates" are implemented in FPGAs (albeit with more inputs and outputs).
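
For concreteness, here is a tiny Python sketch of that idea (my own illustration; the NAND table below is just one example configuration):

    def lut2(table, a, b):
        # four bits of memory, two address lines: bit (2a + b) is the output
        return (table >> (2 * a + b)) & 1

    NAND = 0b0111  # outputs 1, 1, 1, 0 for (a, b) = (0,0), (0,1), (1,0), (1,1)
    assert [lut2(NAND, a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 1, 0]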


You can almost make a continuous variable version of all the logic gates, for variables A and B which are between 0 and 1, but I couldn't quite get it to work. I tried: (I'm using => to mean "maps to")

!A => 1 - A

A && B => AB

A || B => A + B - AB

If you do this, some of the rules of logic work fine:

!(!A && !B) => 1 - ((1 - A)(1 - B)) = A + B - AB, which is the same as the expression for A || B.

But A && A obviously doesn't work, you get A^2 instead of just A.

And to relate this to the article, the rule to convert NAND into OR doesn't work either; you get

(A NAND A) NAND (B NAND B) => 1 - ((1 - A^2) * (1 - B^2)) = A^2 + B^2 - (AB)^2

Now all of these do evaluate to the right answer when A and B are either 0 or 1 but not when they are some value in between.

You can also try letting A && B => sqrt(AB) but you get similar issues.
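
A short Python sketch of the attempt above (the test values 0.5 and 0.25 are arbitrary), confirming where it holds and where it breaks:

    def NOT(a):    return 1 - a
    def AND(a, b): return a * b
    def OR(a, b):  return a + b - a * b

    a, b = 0.5, 0.25
    print(OR(a, b), NOT(AND(NOT(a), NOT(b))))  # De Morgan holds: 0.625 0.625
    print(AND(a, a))                           # idempotence fails: 0.25, not 0.5

    def NAND(a, b): return NOT(AND(a, b))
    print(NAND(NAND(a, a), NAND(b, b)))        # 0.296875, not OR(a, b) == 0.625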


As far as I know, a continuous-variable generalization of boolean logic can be achieved in various ways. One way is Cox's theorem [1], which forms the basis of Bayesian probability theory: it gives you a consistent logic that reduces to the standard rules of boolean logic in the 0/1 limit.

[1] https://en.wikipedia.org/wiki/Cox%27s_theorem


The way I came up with my attempt was just to notice that when you are looking at the probability of two independent events happening together, you multiply them; and if you want the probability of A or B or both, you add them and then subtract the probability of (A and B). I guess the reason the multiplication rule doesn’t work for (A && A) is that those two variables are clearly not independent, since they’re the same variable.


This sounds like the limit of many-valued logic:

https://news.ycombinator.com/item?id=20180599


This article shows more universal gates than the NAND and NOR that are usually discussed. Are any Hacker News users knowledgeable about the topic? Because I was wondering why I never heard about these universal gates before. Almost always, only NAND and NOR are discussed.


Although I have a soft spot for abjunction (as discussed in my wall-of-text comment deeper in the thread), my favorite universal gate is the three-input conditional: x if y else z, or y ? x : z in C or JS syntax, the element from which BDDs are built. (Like abjunction, it's falsehood-preserving, and it's also truth-preserving, so unlike NAND and NOR, it is only "universal" if you have access to constants TRUE and FALSE.) As Darius Bacon has pointed out, one way to generalize it to continuous variables, or variables over arbitrary fields, is as the lerp operation commonly provided by GPUs: yx + (1-y)z, or, without using constants, z + y(x-z). In GF(2), subtraction and addition are both XOR, so that ends up being z ^ y & (x ^ z), which is a correct definition of the operation for bits.
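
A quick Python check (mine, not the commenter's) that the bitwise form really computes the conditional on each bit:

    for x in (0, 1):
        for y in (0, 1):
            for z in (0, 1):
                assert z ^ (y & (x ^ z)) == (x if y else z)
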

The universal reversible gates like the Fredkin gate and the Toffoli gate are also really interesting and potentially important.

Designing circuits with separate combinational gates and registers is fairly easy, but it treats the temporal separation of cause and effect as a sort of defect in the universe, and consequently we have to be cautious about glitches (when we don't clock) and metastable states resulting from inadequate setup and hold times (when we do clock). Suppose we continue to discretize time with clocks just as in the current RTL paradigm; can we find a finite state machine that is minimal and universal in some NAND-like sense, and also admits a reasonably tractable design methodology? Would it have two binary inputs, one binary output, and one bit of state? Is it just a J-K flip-flop?


I would guess that NAND and NOR might better lend themselves to practical physical implementation.

Also: If you don't limit yourself to 2-input/1-output gates, there's loads more universal gates. Many 3-in/3-out reversible gates are universal [1]. I really have a soft spot for the Fredkin, it just seems so elegant.

[1] http://www.thefullwiki.org/Three-input_universal_logic_gate


The sets of monotone functions, linear functions, etc. are called clones, and the clones form a lattice called Post's lattice.

https://en.wikipedia.org/wiki/Post%27s_lattice

Given several logic gates, if the smallest clone containing all of them (their join in the lattice) is the clone of all Boolean functions, then that set is universal. This provides a convenient prescription for deciding universality for a new k-input gate.
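
As an illustrative Python sketch (my own, for the single-gate case): by Post's theorem, one k-input gate is universal on its own iff it escapes all five maximal clones, which is easy to test by brute force when the gate is given as a truth-table dict:

    from itertools import product
    from functools import reduce

    def is_sole_universal(f, k):
        # universal alone iff f is in none of Post's five maximal clones
        pts = list(product((0, 1), repeat=k))
        preserves_0 = f[(0,) * k] == 0
        preserves_1 = f[(1,) * k] == 1
        monotone = all(f[p] <= f[q] for p in pts for q in pts
                       if all(a <= b for a, b in zip(p, q)))
        self_dual = all(f[p] != f[tuple(1 - a for a in p)] for p in pts)
        c0 = f[(0,) * k]  # affine: f == c0 xor c1&x1 xor ... xor ck&xk?
        c = [c0 ^ f[tuple(int(j == i) for j in range(k))] for i in range(k)]
        affine = all(f[p] == reduce(lambda s, t: s ^ t,
                                    (ci & xi for ci, xi in zip(c, p)), c0)
                     for p in pts)
        return not (preserves_0 or preserves_1 or monotone
                    or self_dual or affine)

    NAND = {p: 1 - (p[0] & p[1]) for p in product((0, 1), repeat=2)}
    XOR = {p: p[0] ^ p[1] for p in product((0, 1), repeat=2)}
    print(is_sole_universal(NAND, 2), is_sole_universal(XOR, 2))  # True False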


It really is about physical implementation. In TTL, a "single-input NAND" and an inverter are the same physical structure, and extending it to multiple inputs involves just adding additional emitters to one BJT structure. For CMOS logic, NOR is the smallest two-input "true" gate, because PMOS transistors have to be larger than NMOS, and two series-connected MOS transistors can be collapsed into one structure with multiple gates that is significantly smaller than two connected transistors.


> I really have a soft spot for the Fredkin, it just seems so elegant.

I agree. It's so simple, yet it easily leads to other seemingly-more-complicated gates, simply by ignoring some of the inputs or outputs.

Plus it has the feature that the # of on inputs always equals the # of on outputs. And the feature that it’s reversible. Really a cool gate all around.


This is some basic freshman or sophomore EE stuff, so that's probably part of the reason why it doesn't get brought up. The other universal, or functionally complete, gates in the article don't get mentioned because they aren't standard parts you can buy.


I'd recommend reading Matrix Logic, by August Stern. It's an approach to logic using vectors and matrices. This article doesn't answer why the math works out such that some gates are universal, but Matrix Logic does.


Not especially knowledgeable, but one factor in the relative lack of interest in other gates might be that NAND and NOR are the only two universal gates which treat their inputs equally. The others ("A and not B", "B and not A", "A or not B", "B or not A") behave differently if you swap A and B, whereas NAND and NOR are symmetrical under that operation.

Edit to add: Incidentally, I do remember reading one logic text where boolean algebra was defined in terms of, and built up entirely from, the "B or not A" gate, which the text called "if" or "->".

It is only false when A is true and B is false, which matches what logicians expect from a conditional statement, for example: "if it's raining then it's cloudy" is only contradicted by the combination of raining but not cloudy. If it's not raining, then the statement is not falsified whether there's clouds or not.
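
In Python terms (a trivial sketch of my own):

    def IMPLIES(a, b):  # "B or not A": false only when A = 1 and B = 0
        return (1 - a) | b

    assert [IMPLIES(a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 0, 1]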


Personally, I don't find them that interesting. AND, OR, NAND, and NOR are all non-linear. However, you can implement any logic function with XOR and AND, which is essentially modulo-2 addition and multiply. So the whole world of GF(2) algebra applies, and that is how error correction and encryption are analyzed. Goest thou, and study GF(2).
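
For example (a sketch): over GF(2), OR itself is the polynomial x + y + xy, and every Boolean function can be written as such a polynomial (its algebraic normal form):

    for x in (0, 1):
        for y in (0, 1):
            assert (x | y) == x ^ y ^ (x & y)  # x + y + xy (mod 2)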


You don't need XOR and AND. You ONLY need NAND (or NOR).

NAND/NOR are universal in that each can single-handedly construct all other gates. XOR and AND together are not a minimal construction.

This is well below the level of understanding needed to get to GF(2).


See my answer above. Even if a single universal gate is enough when you disregard speed and cost (area and power consumption), in physical logic circuits speed and cost are essential, so you cannot use a single type of universal gate; you must use a small number of basic gates, because each of them has a better implementation from diodes and transistors than an implementation made as a cascade of a single type of universal gate.


Boolean logic says nothing about implementation.

They're entirely different levels of abstraction.

https://en.wikipedia.org/wiki/Functional_completeness This line in particular: "In digital electronics terminology, the binary NAND gate and the binary NOR gate are the only binary universal logic gates."

Saying that insight (that you can reduce any Boolean equation to a single universal gate) isn't useful, when it is a cornerstone of modern equivalence checking, Boolean SAT solving, and minimization of combinational logic, is really, really missing some very key things...

It is the kind of fundamental cornerstone that allows modern digital computers to even exist.


Doesn't equivalence checking normally use BDDs, and SAT solving normally use separate AND, OR, and NOT? I mean the algorithm to reduce an expression to CNF is a pretty simple set of local rewrite rules with AND, OR, and NOT; I could be wrong but I feel like the NAND-based equivalent of CNF is quite a bit hairier.

As for minimizing combinational logic, don't you want to use the full set of gates available to you in that case, taking into account their cost, and maybe trying to minimize or at least limit circuit depth? It seems like just using NAND would be kind of a step backwards in that case. Karnaugh maps give you DNF; PALs/GALs want DNF; the fact that your DNF PAL circuit might just be two layers of NAND (equivalent to a layer of AND feeding into an OR layer) is kind of an implementation detail, isn't it? I mean you would configure the circuit in the exact same way if it were made out of physical AND and OR gates.


Of course after you do the minimization, you move to a technology mapping step that takes these things into account.

"the fact that your DNF PAL circuit might just be two layers of NAND (equivalent to a layer of AND feeding into an OR layer) is kind of an implementation detail, isn't it?"

No. That's absolutely, 100% fundamental.

It's called "Canonical Form" https://en.wikipedia.org/wiki/Canonical_normal_form

It's canonical for a reason.


It seems that you didn't understand my comment well enough to respond to it, maybe due to a lack of domain knowledge.


Sure, what's a good reference?


Also, I forgot to mention in my first answer above that it is false that "XOR and AND is not a minimal construction".

The following 4 sets of logic operations (and many others) are minimal:

1. AND, NOT
2. AND, XOR
3. NAND
4. NOR

So AND + XOR is a minimal set, because you can make NOT from an XOR where one input is true.
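
In Python terms (a sketch; note that the construction needs the constant 1, a point taken up further down the thread):

    def NOT(a):     return a ^ 1          # XOR against the constant 1
    def NAND(a, b): return (a & b) ^ 1    # AND plus that NOT reaches NAND

    assert [NOT(a) for a in (0, 1)] == [1, 0]
    assert [NAND(a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 1, 0]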


If you make a NOT from an XOR, and already have AND, you've made...

NAND!

Which is why it isn't a minimal set - because you're using AND and XOR to get NAND, which (by itself) is universal!


The question about a minimal set of primitives arises in many domains, not only in Boolean functions, for example in structured programming or in a minimal computer instruction set.

In all such domains it is possible to find a set composed of only one primitive and some people think that that is the minimal set because it has a minimum number of primitives.

Nevertheless, other people do not agree that the minimum number of primitives is the correct goal for minimal complexity.

In all such cases, the single primitive is more complex than the primitives belonging to an equivalent set with more members: it is equivalent to a composition of all the simpler primitives, arranged in such a way that each of the simpler primitives can be obtained by composing the single primitive with itself.

In my opinion, it is nice to know this trick about reducing a set of independent primitives to only one, but it almost never has any practical application.

When reasoning about that domain, using the single primitive does not make anything easier, because it glues together independent concepts which are easier to handle separately, not in a forced combination. For example, when thinking about Boolean functions, it is easier to think about AND-OR stages instead of NAND-NAND, or about OR-AND stages instead of NOR-NOR. So the knowledge that you could use just NAND does not help you.

If we are talking about implementation, there too the single primitive does not help you, because you must actually implement it from simpler primitives.

For example the main blocks for TTL logic are minimum circuits with diodes (AND), parallel transistors (OR) and transistor inverters (NOT), while the main blocks for CMOS logic are parallel transistors, series transistors, transistor inverters and transmission gates.

You do not design just a NAND and then make the other cells from it, but you must design at least a NAND, a NOR, a XOR and a multiplexer, then make other functions from these blocks. Knowing that you might use just a NAND does not help you, because any implementation that uses only NAND is less efficient than those based on multiple simpler primitives.

An example from another domain: you can use just a WHILE-DO to implement both an IF-THEN and a REPEAT-UNTIL, so you can claim that it is enough to have that one primitive.

Nevertheless, if you look at the implementation, both IF-THEN and REPEAT-UNTIL are made with a single conditional jump. On the other hand, you must use 2 conditional jumps for a WHILE-DO; there is no way to implement it other than as a composition of IF-THEN with REPEAT-UNTIL. If you implement the simpler primitives with WHILE-DO, you obtain a more inefficient implementation.

In conclusion, while it is interesting to know that you can reduce most or all sets of primitives to only one complex primitive, both in theory and in practice the sets of multiple simple primitives are more useful.


"In my opinion, it is nice to know this nice trick about reducing a set of independent primitives to only one, but this almost never has any practical application."

Also, NAND/NOR aren't "complex primitives"; they're literally just primitives.

You've never used any modern computer? I consider modern computers pretty useful; considering this "trick" backs everything we do with modern computers, I consider it pretty fundamental...


That makes the set {AND, XOR} a cover, yes, but since it is larger than {NAND} (or {NOR}) it is not a minimum cover.


In the set-containment lattice, neither {AND, XOR} nor {NAND} is "larger than" the other, so they are both minimum covers.


Agh, yes, you're absolutely right. Been too long :(


XOR and AND is a falsehood-preserving set of gates. Whatever acyclic circuit you make out of them will always produce 0 as its output if all its inputs are 0.

It's a minimal universal set if you have access to constants (specifically, constant 1), but the other sets you mention do not require access to constants for universality. AND-NOT (BIC, set subtraction, logical abjunction, &^ in Golang) is a single binary gate that is minimal and universal in the same way (that is, it's falsehood-preserving, but universal if you have a constant 1), and it has the distinction of being the only universal gate available as a bitwise operation in many CPU instruction sets.

Here's a minimal construction of all 16 primitive binary combinations using XOR and AND:

    r[ 0] = x (truth table 0011, cost 0) = x
    r[ 1] = y (truth table 0101, cost 0) = y
    r[ 2] = 0 (truth table 0000, cost 1) = 0
    r[ 3] = -1 (truth table 1111, cost 1) = -1
    r[ 4] = r[0] ^ r[1] (truth table 0110, cost 1) = x ^ y
    r[ 5] = r[0] ^ r[3] (truth table 1100, cost 2) = x ^ -1
    r[ 6] = r[1] ^ r[3] (truth table 1010, cost 2) = y ^ -1
    r[ 7] = r[0] & r[1] (truth table 0001, cost 1) = x & y
    r[ 8] = r[0] & r[4] (truth table 0010, cost 2) = x & (x ^ y)
    r[ 9] = r[1] & r[4] (truth table 0100, cost 2) = y & (x ^ y)
    r[10] = r[5] & r[6] (truth table 1000, cost 4) = (x ^ -1) & (y ^ -1)
    r[11] = r[0] ^ r[6] (truth table 1001, cost 3) = x ^ (y ^ -1)
    r[12] = r[0] ^ r[9] (truth table 0111, cost 3) = x ^ (y & (x ^ y))
    r[13] = r[3] ^ r[9] (truth table 1011, cost 4) = -1 ^ (y & (x ^ y))
    r[14] = r[3] ^ r[8] (truth table 1101, cost 4) = -1 ^ (x & (x ^ y))
    r[15] = r[3] ^ r[7] (truth table 1110, cost 3) = -1 ^ (x & y)
And here it is with just abjunction:

    r[ 0] = x (truth table 0011, cost 0) = x
    r[ 1] = y (truth table 0101, cost 0) = y
    r[ 2] = 0 (truth table 0000, cost 1) = 0
    r[ 3] = -1 (truth table 1111, cost 1) = -1
    r[ 4] = r[0] &^ r[1] (truth table 0010, cost 1) = x &^ y
    r[ 5] = r[1] &^ r[0] (truth table 0100, cost 1) = y &^ x
    r[ 6] = r[3] &^ r[0] (truth table 1100, cost 2) = -1 &^ x
    r[ 7] = r[3] &^ r[1] (truth table 1010, cost 2) = -1 &^ y
    r[ 8] = r[0] &^ r[4] (truth table 0001, cost 2) = x &^ (x &^ y)
    r[ 9] = r[3] &^ r[4] (truth table 1101, cost 3) = -1 &^ (x &^ y)
    r[10] = r[3] &^ r[5] (truth table 1011, cost 3) = -1 &^ (y &^ x)
    r[11] = r[6] &^ r[1] (truth table 1000, cost 3) = (-1 &^ x) &^ y
    r[12] = r[3] &^ r[8] (truth table 1110, cost 4) = -1 &^ (x &^ (x &^ y))
    r[13] = r[3] &^ r[11] (truth table 0111, cost 4) = -1 &^ ((-1 &^ x) &^ y)
    r[14] = r[9] &^ r[5] (truth table 1001, cost 5) = (-1 &^ (x &^ y)) &^ (y &^ x)
    r[15] = r[3] &^ r[14] (truth table 0110, cost 6) = -1 &^ ((-1 &^ (x &^ y)) &^ (y &^ x))
You might intuitively suppose that &^ is "more efficient" or "more expressive" than NAND or NOR, since x &^ y gives you different results from y &^ x, so you would think that with the same number of gates, you would have more usefully different possibilities, and so some circuits would require fewer gates using abjunction than using NAND. However, I haven't found a natural family of circuits for which that is the case. Some circuits are simpler with abjunction (abjunction itself, for example, is only a single gate with abjunction), while other circuits are simpler with NAND, but neither one seems to have the kind of clear 2ⁿ advantage you'd expect from this asymmetry. Here's NAND's corresponding table:

    r[ 0] = x (truth table 0011, cost 0) = x
    r[ 1] = y (truth table 0101, cost 0) = y
    r[ 2] = r[0] &̄ r[0] (truth table 1100, cost 1) = x &̄ x
    r[ 3] = r[0] &̄ r[1] (truth table 1110, cost 1) = x &̄ y
    r[ 4] = r[1] &̄ r[1] (truth table 1010, cost 1) = y &̄ y
    r[ 5] = r[0] &̄ r[2] (truth table 1111, cost 2) = x &̄ (x &̄ x)
    r[ 6] = r[0] &̄ r[3] (truth table 1101, cost 2) = x &̄ (x &̄ y)
    r[ 7] = r[1] &̄ r[2] (truth table 1011, cost 2) = y &̄ (x &̄ x)
    r[ 8] = r[2] &̄ r[4] (truth table 0111, cost 3) = (x &̄ x) &̄ (y &̄ y)
    r[ 9] = r[3] &̄ r[3] (truth table 0001, cost 2) = a &̄ a where a = x &̄ y
    r[10] = r[3] &̄ r[8] (truth table 1001, cost 5) = (x &̄ y) &̄ ((x &̄ x) &̄ (y &̄ y))
    r[11] = r[5] &̄ r[5] (truth table 0000, cost 3) = a &̄ a where a = x &̄ b and b = x &̄ x
    r[12] = r[6] &̄ r[6] (truth table 0010, cost 3) = a &̄ a where a = x &̄ b and b = x &̄ y
    r[13] = r[7] &̄ r[7] (truth table 0100, cost 3) = a &̄ a where a = y &̄ b and b = x &̄ x
    r[14] = r[8] &̄ r[8] (truth table 1000, cost 4) = a &̄ a where a = c &̄ b and b = y &̄ y and c = x &̄ x
    r[15] = r[6] &̄ r[7] (truth table 0110, cost 5) = (x &̄ (x &̄ y)) &̄ (y &̄ (x &̄ x))
The searching is done with http://canonical.org/~kragen/sw/dev3/abjsearch.py.


XORs are slow and expensive. That's why all you see is NAND and NOR. This article doesn't really make much sense. Nobody designs at this level except for understanding and abstraction.


In most logic circuit families, a XOR is slightly slower and more expensive than a NAND or a NOR, but it has about the same cost as an AND or an OR, so it does not seem appropriate to call it slow and expensive.

Even though in theory you can make any logic gate from a combination of NAND or NOR gates, the XOR gate is almost never built that way.

In most logic circuit families, including the most popular ones, i.e. TTL and CMOS, there are special circuits for implementing XOR, which are better than a NAND or NOR combination.

Also, in most logic circuit families, including the most popular ones, i.e. TTL and CMOS, even if you can implement NAND with NOR and NOR with NAND, this is not done; there are distinct circuits for NAND and NOR.

In conclusion, in most logic circuit families you do not implement everything with a single kind of universal gate, but you have 3 kinds of basic blocks, NAND, NOR and XOR.

In certain logic families you have extra basic blocks, which can be implemented in a better way than the equivalent schematic made of universal gates.

Moreover, even if TTL circuits are usually presented as being composed of NAND gates, it is more useful to view the NAND TTL or DTL gate as an AND gate (a minimum-computing circuit made with diodes) followed by an inverter made with a transistor (which can be expanded to a NOR gate by adding parallel transistors).

So the real basic blocks of TTL circuits are AND, NOR and XOR (the NOR being not the same as the NOR from the NAND+NOR+XOR description, but a sub-circuit of that), but these simpler AND and NOR blocks cannot be used by themselves, because the AND does not have TTL output levels and the NOR does not have TTL input levels, so you always must have an AND (possibly reduced to a repeater) followed by a NOR (possibly reduced to an inverter), to make a complete TTL gate.

My point is that while it is useful to know that any gate can be implemented using only a single kind of universal gate, such implementations are not efficient and you must use a small number of basic blocks, not only one.

On the other hand, when you want to understand the theory of logic gates, basing them on one kind of universal gate is also not useful.

You can better understand logic operations when you see them as based on minimum (or the equivalent maximum) and on inversion (for values between 0 and 1, inversion is 1-x; for values between -1 and 1, it is -x). These 2 operations are also valid when applied to non-binary logic values. The physical implementation of these 2 operations can actually work with real-valued inputs and outputs.
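
A small Python sketch of that view (the values 0.3 and 0.8 are arbitrary): with minimum/maximum/inversion, the identities that broke for the product-based gates discussed earlier in the thread hold for all values in [0, 1]:

    def NOT(a):    return 1 - a
    def AND(a, b): return min(a, b)
    def OR(a, b):  return max(a, b)

    a, b = 0.3, 0.8
    assert AND(a, a) == a                        # idempotence holds
    assert OR(a, b) == NOT(AND(NOT(a), NOT(b)))  # De Morgan holds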

An alternative view for the binary logic case is that mentioned by someone above, of having addition & multiplication modulo 2 as the basic operations.

Either the minimum/inversion view or the addition/multiplication view gives much more insight than reasoning about NAND or NOR, which just combine the 2 basic operations in such a way that a cascade of the combinations can recover each of the basic operations, so they merely provide an alternative.


That isn't correct at all; the reason you see NAND/NOR is because they're all that's needed (they are universal). GP was incorrect.


NOR and NAND are symmetric in their arguments. The other four universal gates are not symmetric: you have to remember and be careful about which input is which. Why would anyone choose to deal with that extra complication when they are no more powerful than the two symmetric universal gates?


There's tons of universal logic gates outside of the 2 -> 1 domain.



