Both are systems of symbols meant to encode 4 bits of information. In both cases, pointy bits in the corners of the symbols represent 1 or true, while round bits are 0 or false.
A difference is that in the case of bibi-binary the intent is to encode numbers, while the intent of the logic alphabet is to encode truth tables of binary logic operators. But fundamentally they encode the same information.
The other difference is that the inventor of bibi-binary was a comedian who obviously did it as a tongue-in-cheek novelty, while the logic alphabet's creator appears to have been quite convinced of the importance of his invention...
> A difference is that in the case of bibi-binary the intent is to encode numbers
bibi-binary is intended to encode raw binary data, not specifically numbers (at least not beyond 15). We already have a script and words which work mostly fine for numbers after all.
> The other difference is that the inventor of bibi-binary was a comedian who obviously did it as a tongue-in-cheek novelty
He was also a mathematician: his studies were cut short by World War II forced labour (from which he escaped), and he did patent bibi-binary; it was not just a joke.
And calling him a comedian is a bit reductive; comedy was his outlet for a passion for wordplay, something he first tried as a songwriter (and poet), but he could find no singer, so he ultimately ended up performing it himself.
In the same vein, Proquints are identifiers that are readable and pronounceable. In "Pronounceable Identifiers", Daniel S. Wilkerson proposes encoding a 16-bit string as a pronounceable quintuplet of alternating consonants and vowels, as follows: four bits for each consonant, two bits for each vowel:
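For the curious, the scheme is small enough to sketch in a few lines. This follows Wilkerson's proposal as described above (16 consonants carrying 4 bits each, 4 vowels carrying 2 bits each, in a C-V-C-V-C pattern); a minimal sketch, not a full implementation:

```python
# Proquint encoding sketch: a 16-bit word becomes one pronounceable
# consonant-vowel-consonant-vowel-consonant quintuplet.
CONSONANTS = "bdfghjklmnprstvz"  # 16 consonants -> 4 bits each
VOWELS = "aiou"                  # 4 vowels -> 2 bits each

def proquint(word: int) -> str:
    """Encode a 16-bit integer as a pronounceable quintuplet."""
    assert 0 <= word < 1 << 16
    return (CONSONANTS[(word >> 12) & 0xF]   # bits 15-12
            + VOWELS[(word >> 10) & 0x3]     # bits 11-10
            + CONSONANTS[(word >> 6) & 0xF]  # bits 9-6
            + VOWELS[(word >> 4) & 0x3]      # bits 5-4
            + CONSONANTS[word & 0xF])        # bits 3-0
```

For example, the high half of the IP address 127.0.0.1 (0x7F00) encodes as "lusab".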
Too many similarly pronounced consonants. A variant that uses only two or three bits for the consonants would be interesting. Maybe some 5-bit subset of the Japanese Gojūon.
That first example's b-b is an immediately blatant problem, especially since h is a valid terminal. With unfamiliar words, there will also be problems with j/g/k, b/v/f, t/d, etc.
The one thing this does better than bibi is that it avoids the "E" which confuses English speakers.
It's a fun idea, but I'm confused as to why the inventor decided that decoding would rely on reading the bits in order of top-to-bottom, left-to-right. To make it intuitive to readers of French, you'd expect the order to be left-to-right, top to bottom. To make it intuitive to mathematicians, you might expect it to start in the upper right and proceed counterclockwise, as per the numbered quadrants of a Euclidean plane. The 16 symbols themselves will remain the same regardless of the order in which you visit the bits, but the sequential position of each symbol will change.
I agree. I also think that the alphabet leans too far in the direction of making the symbols distinct, and not enough in the direction of making them transparently interpret-able as binary - which is the unique aspect of a special binary notation like this and should be emphasized.
The bibi-symbols are quite bad for those of us with dyslexia, because mirror variants are so common. Yours are actually worse for that; every rotation of every symbol is present! Serifs could help.
Yeah, that makes sense. Given your account, it sounds like the 2D paradigm of the bibi-symbols (which I'm following) is fundamentally bad for interpretability with dyslexia, because it results in a lot of symmetries.
Even when considering the population of people without dyslexia, I can't think of examples of vernacular human alphabets that have many pairs of distinct symbols that are identical under some symmetry. So maybe symmetry is an alphabet design antipattern.
I was confused that some of the combinations of single-bit shapes don’t match the shapes of the corresponding two-bit combinations. For example, the shapes for 0001 (╮) and 0010 (╯) combined result in the shape for 1100 (⊃), not in the shape for 0011 (⊂).
Of course, if that shape-combining logic was followed consistently, the circle would have to be 1111, not 0000.
The bibi-binary notation encodes bit shifts in very interesting ways. Take 1, 2, or 3 (only some of the low two bits set) and shift by two bits and you get a left-right mirror image; take 4 or 5 and shift by one and you get an upside-down mirror. I think the notation is awesome.
Yes, and another reading of the pun not explained in the page, on top of 'bibi' meaning 'me', is that 'bibine' (present in the pronunciation of 'bibinaire') is French slang for alcohol.
This part of the introduction is flagged as missing a citation; does anyone have more information?
I'm very curious to learn more:
"It found some use in a variety of unforeseen applications: stochastic poetry, stochastic art, colour classification, aleatory music, architectural symbolism, etc.[citation needed]"
That section on negative numbers seems incomplete. It doesn't indicate how you write negative numbers; instead it talks about how normal binary computers use two's complement.
Edit: I think I was assuming you'd only use 4 bit or 8 bit values with this notation.
As far as I can see the section on negative numbers was completely made up, or at least it is not sourced in the system's patent. The encoding was mostly defined as a way to transmit (and write down) binary data in nibbles (groups of 4 bits), it does not really concern itself with what that data encodes.
The one thing it does support which the wiki does not note is a run-length encoding: a syllable ending in R is the repeated nibble, and the segment until the next R is interpreted as a positive integer repetition count, so HORBAR is 0 repeated 5 times, and DIRHAHOR is 15 (a fully set nibble) repeated 16 times.
And of course I forgot about HN's shitty markup and didn't notice before the edition timer ran out.
HORBAR is repeat(0x0, count=5)
DIRHAHOR is repeat(0xF, count=16)
Also in script form this is written as a strange parenthetical. Strange because the repeated nibble gets an open paren "stuck" to it, like a tail. So repeated nibbles don't look much like their base form.
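The run-length scheme described above can be sketched as a toy decoder. The function names are mine, not from the patent; this just assumes the syllable structure as summarized in this thread (consonant H/B/K/D for the high two bits, vowel O/A/E/I for the low two, with R marking the repeated nibble and closing the count):

```python
CONSONANTS = "HBKD"  # high two bits of each nibble
VOWELS = "OAEI"      # low two bits

def nibble(syllable: str) -> int:
    """Decode one bibi-binary syllable, e.g. 'HO' -> 0, 'DI' -> 15."""
    return CONSONANTS.index(syllable[0]) * 4 + VOWELS.index(syllable[1])

def decode_run(text: str) -> list[int]:
    """Expand one run group: an R-marked syllable (the repeated nibble)
    followed by count syllables read big-endian, ending in another R."""
    assert text[2] == "R" and text[-1] == "R"
    value = nibble(text[:2])
    count, i = 0, 3
    while i < len(text) - 1:  # stop before the closing R
        count = count * 16 + nibble(text[i:i + 2])
        i += 2
    return [value] * count

# decode_run("HORBAR")   -> five 0x0 nibbles
# decode_run("DIRHAHOR") -> sixteen 0xF nibbles
```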
Indeed the section on negative numbers is bullshit. One's or two's complement representation only works with fixed-length numbers. Arbitrary-length numbers would need an infinite number of leading 1 digits.
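A quick illustration of why the width matters (my own example, not anything from the article): the same negative value gets a different bit pattern at every width, so there is no arbitrary-length version.

```python
def twos_complement(value: int, bits: int) -> str:
    """Render a signed integer in two's complement at a fixed width."""
    return format(value & ((1 << bits) - 1), f"0{bits}b")

# -1 is all ones at every width, so an unbounded width would need
# infinitely many leading 1s:
# twos_complement(-1, 4) -> "1111"
# twos_complement(-1, 8) -> "11111111"
# twos_complement(-3, 4) -> "1101"
```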
Doesn’t feel entirely fair to call that base-10000. Practically, it’s just base-10 with an idiosyncratic and mandatory ligation scheme involving mirroring, and only defined for 0001–9999 and not 0000 for some reason (though it’s obvious how zero should be represented).
The patent explains the reasoning, though I can't say I understand all of it. This is all in terms of French phonology:
- open vowels for even, closed for odd (no idea why)
- back vowel for the second bit being unset, front for set (according to the wiki e is ə and central and mid rather than back and open but...), this kind-of matches the reasoning of the consonants positionality where the front sets the bit.
- H is the first consonant because it's mostly silent
- The patent justifies B, K, and D as labial, guttural, and dental. I guess they physically indicate the bits set, with the low bit being the front of the mouth: labial is the low bit, guttural is the high bit, and dental, being in between, is both.
You can see the same physicality in the script, where "sharp edges" point to set bits when the nibble is written in pairs of two, big endian, columnar (so each block is read top to bottom, left to right, rather than left to right, top to bottom).
Bobby Lapointe, the creator of the system, was a comedic singer who did a lot of wordplay, so building the system off of phonetics is entirely in line.
He even did some English, though I expect the standard is not very high for English speakers since it was targeted at French audiences: https://youtu.be/FnFnms--ifk
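The consonant/vowel pairing described above generates the whole syllabary in a couple of lines (the HBKD/OAEI ordering follows the scheme as summarized in this thread):

```python
# Generate the 16 bibi-binary syllables: the consonant carries the high
# two bits of the nibble, the vowel the low two.
CONSONANTS = "HBKD"
VOWELS = "OAEI"

syllables = [CONSONANTS[n >> 2] + VOWELS[n & 3] for n in range(16)]
# 0 -> "HO", 5 -> "BA", 15 -> "DI"
```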
The Logic Alphabet (linked to above at http://www.logic-alphabet.net/images/flipstick_2347_2.jpg), which has a similar concept, seems to have better support: it can represent twelve of the symbols using Latin letters (o, p, b, q, d, c, u, s, z, n, h, and x, although z, u and n seem a bit forced) and the others seem to have decent Unicode equivalents (ɔ or ⊃ (superset), μ (Greek letter mu), ɥ or ч, and maybe ʎ).