I'd also add that random combinations of flashing lights are horrible from a UX standpoint. Has anyone tried programming a universal remote that only communicates via flashing lights?
I'd say you can convey any data with just 1 bit - imagine Morse code but with only dots. It would be tedious, though (if you want to do it by hand).
But if you think of it as a series - then yes, time is involved. Written in dot-dash form, the length of the symbol represents time.
Viruses that are intended to stay hidden and undiscovered are a relatively new thing.
WARNING: this is super tangential and has basically nothing to do
with 1-bit UX design except perhaps in the sense that it gives
some glimpse at the flexibility of information theory regarding
time/space tradeoffs. Mostly, I wrote it up because I'm bored.
Still later, in a signal processing class in college, I finally learned why they were so proud of their 1-bit data converter. The converter in question was undoubtedly a delta-sigma DAC, and the reason to be proud of just a single bit is that such data converters are, by their nature, highly linear!
Let's look at the dual of the delta-sigma DAC, the delta-sigma ADC. Assume that I'm trying to somehow represent an analog signal (a continuously-valued function, continuous in time) with a 1-bit digital signal (a binary-valued function whose points fall at discrete time intervals). For any such representation, at a given point in time there will be some difference between the continuously-valued and the discrete-valued function. We'll call this error "quantization noise" (for reasons that are partially obvious already and which will be perhaps more obvious later).
If I choose the discrete time intervals to be close enough together (that is, if the sample rate is fast enough), I can try to counteract whatever error exists now by choosing the next binary output to include not only information about that point in time, but also about the error I've just introduced. High frequency information will get little or no benefit, but low-frequency information can be reproduced more faithfully in this fashion.
How do I do this? With a feedback loop. If I want to turn an analog signal into either a 1 or a 0, all I have to do is compare it against some threshold (say, halfway between the min and max values). But now, instead of comparing the input against this threshold, I instead threshold the integral of the input plus the quantization error I've introduced with all my past comparisons. I do this by feeding back every decision the comparator makes to the input, and integrating the difference between the present input and those decisions.
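For the curious, that loop is simple enough to sketch in a few lines of Python. This is a toy first-order modulator under my own naming, not any particular chip's implementation:

```python
def delta_sigma_1bit(samples):
    """Toy first-order delta-sigma modulator: each input sample in
    [-1, 1] becomes one bit (+1 or -1). The integrator accumulates the
    difference between the input and the fed-back decision, which is
    exactly the "integrate the error" feedback described above."""
    integrator = 0.0
    bits = []
    for x in samples:
        # comparator: threshold the integrator state at zero
        y = 1.0 if integrator >= 0 else -1.0
        bits.append(y)
        # feed the decision back and integrate the difference
        integrator += x - y
    return bits

# A constant input of 0.25 should yield +1 bits with density
# (1 + 0.25) / 2 = 0.625 once averaged over many samples.
bits = delta_sigma_1bit([0.25] * 10000)
print(sum(1 for b in bits if b > 0) / len(bits))  # prints 0.625
```

The point of the toy: the instantaneous output is wildly wrong at every sample (it's always full-scale one way or the other), but the running average tracks the input, because the integrator remembers every past mistake.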
If we make some convenient assumptions about the quantization noise (for a white Gaussian input signal, the quantization noise is also white and Gaussian, so analyzing its spectrum becomes pretty easy), we can show that the effect of this feedback loop is to push almost all the quantization noise to very high frequencies. We can later reconstruct the original signal by filtering out all the quantization noise (with a few low-pass filters). You might object that non-white input signals surely won't result in white quantization noise, and you're right, but it turns out that even substantially tonal input signals can be processed in this fashion with at most a tweak or two to the underlying structure.
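A quick way to see the "filter out the quantization noise" step numerically - here sketched in Python with a crude moving-average low-pass standing in for a real decimation filter (toy code and made-up parameters, not a production decimator):

```python
import math

def modulate(samples):
    # first-order delta-sigma loop: integrate the error, threshold, feed back
    acc, bits = 0.0, []
    for x in samples:
        y = 1.0 if acc >= 0 else -1.0
        bits.append(y)
        acc += x - y
    return bits

def boxcar(bits, n):
    # crude low-pass filter: length-n moving average
    out, s = [], 0.0
    for i, b in enumerate(bits):
        s += b
        if i >= n:
            s -= bits[i - n]
            out.append(s / n)
    return out

# Heavily oversampled low-frequency sine in, 1-bit stream out. After
# low-pass filtering, the waveform reappears from bits that are only
# ever +1 or -1, because the loop pushed the noise above the cutoff.
N, win = 1 << 15, 256
sig = [0.5 * math.sin(2 * math.pi * i / 4096) for i in range(N)]
rec = boxcar(modulate(sig), win)
# each rec[k] averages bits centered near input sample k + win // 2
err = max(abs(r - sig[k + win // 2]) for k, r in enumerate(rec))
print(err)  # small compared to the full-scale +/-1 bit swings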
OK, so what of the claim that this system is highly linear? Obviously in the large-signal sense a comparator is highly nonlinear, but in the small-signal sense a comparator is perfectly linear: it can produce only two outputs, and any two points perfectly describe a line. By contrast, let's say that instead of doing a one-bit data conversion, I'd chosen to do a two-bit conversion, i.e., there are four possible points in the constellation. In that case, I have to be absolutely sure that the difference between each pair of sequential codes is precisely the same. If not, the result will be harmonic distortion in the output signal.
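That "two points perfectly describe a line" argument is easy to convince yourself of numerically. In this sketch (mismatch numbers entirely made up), gain and offset errors are removed with a least-squares line fit; whatever residual remains is level mismatch that no trim can remove, and it shows up as harmonic distortion:

```python
def fit_residual(ideal, real):
    """Least-squares line through (ideal, real) level pairs; the
    residuals are the part of the mismatch that gain and offset
    adjustments cannot absorb."""
    n = len(ideal)
    mx = sum(ideal) / n
    my = sum(real) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(ideal, real)) \
        / sum((x - mx) ** 2 for x in ideal)
    return [y - (my + slope * (x - mx)) for x, y in zip(ideal, real)]

# Two levels: any mismatch is pure gain + offset, so residuals vanish.
two = fit_residual([-1.0, 1.0], [-0.97, 1.02])
print(max(abs(r) for r in two))   # ~0 (float rounding only)

# Four mismatched levels (2-bit DAC): residuals remain -> distortion.
four = fit_residual([-1.5, -0.5, 0.5, 1.5], [-1.52, -0.46, 0.51, 1.49])
print(max(abs(r) for r in four))  # clearly nonzero
```

With two levels the fitted line passes through both points exactly, which is the whole linearity argument in miniature.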
So that CD manufacturer did have reason to be so proud of their 1-bit interface after all: with only two states, harmonic distortion is (to first order) eliminated, and so the linearity of the conversion from the digital codes stored on the CD to the analog output is quite good.
Note that there is a bit of work to even get to the point where you have the appropriate data for a 1-bit DAC, since a CD stores audio as 16-bit PCM data with a 44.1 kHz sample rate. In fact, you can convert this to a 1-bit data stream at a much higher bitrate through a process called interpolation, and further one way of implementing such an interpolator is by building a fully digital delta-sigma loop!
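As a toy illustration of the rate change only: here hypothetical linear interpolation stands in for the proper low-pass interpolation filters a real player would use, and the delta-sigma remodulation step is omitted:

```python
def upsample_linear(pcm, factor):
    """Crude interpolator: linear interpolation between PCM samples.
    Real interpolators use proper filters; this just shows the rate
    increase that precedes the digital delta-sigma loop."""
    out = []
    for a, b in zip(pcm, pcm[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(pcm[-1])
    return out

hi = upsample_linear([0.0, 1.0, 0.0], 4)
print(hi)  # [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
```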
One last point of interest: SACD skips over the PCM data entirely, storing a 1-bit delta-sigma modulated stream directly on the disc with a sample rate of 2.8224 MHz. At this sample rate, the audio bandwidth and resolution after filtering is substantially better than CD (in practical implementations, 105 dB dynamic range with 50 kHz audio bandwidth, versus 90 dB dynamic range and 20 kHz audio bandwidth for CDs). Of course, questions regarding the utility of this additional performance remain hotly debated.
Another interesting UX exercise would be a 1-bit input, 1-bit output system.