In the video, it’s clear that the results were downloaded from the Neuralink website, no errors occurred, and the results are displayed correctly. Could you specify why you believe it’s Bogus?
Analyzing the data, it becomes clear that the A/D converter used by Neuralink is defective, i.e. it has very poor accuracy. The A/D introduces a huge amount of distortion, which in practice manifests as noise.
Until this A/D linearity problem is fixed, there is no point pursuing compression schemes. The data is so badly mangled it makes it pretty near impossible to find patterns.
It's actually amazing that Neuralink can use this badly distorted data.
I imagine that fixing the A/D would improve their results dramatically -- lower latency and higher precision.
Why Neuralink has continued work with such an obvious hardware defect is a serious question. Do they actually analyze the A/D to make sure it's working properly?
they're looking for a compressor that can do more than 200 Mb/s on a 10 mW power budget (that's including the radio, so it has to run on a CPU clocked like the original 8086) and yield a 200x size reduction. speaking from the perspective of a data compression person, this is completely unrealistic. the best statistical models that i have on hand yield ~7x compression ratio after some tweaking, but they won't run under these constraints.
I thought 200x is too extreme as well. In compression literature, is there a way to estimate the upper limit on lossless compressibility of a given data set?
There is not, because there could always be some underlying tricky generator that you just haven't discovered, and discovering that pattern is basically equivalent to solving the halting problem. (See https://en.wikipedia.org/wiki/Kolmogorov_complexity#Uncomput...)
As a trivial example, if your dataset is one trillion binary digits of pi, it is essentially incompressible by any regular compressor, but you can fit a generator well under 1 kB.
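To make the "tiny generator" point concrete, here is Gibbons' unbounded spigot algorithm, which emits the digits of pi one at a time using only integer arithmetic (decimal rather than binary digits, but the idea is the same). A compressor that somehow recognized pi could replace the whole dataset with this program plus a digit count:

```python
def pi_digits():
    """Gibbons' unbounded spigot: yields the decimal digits of pi
    one at a time (3, 1, 4, 1, 5, ...) using only integer arithmetic."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, n = 10 * q, 10 * (r - n * t), t, (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (
                q * k, (2 * q + r) * l, t * l, k + 1,
                (q * (7 * k + 2) + r * l) // (t * l), l + 2,
            )

gen = pi_digits()
print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The whole generator is a few hundred bytes of source, yet no general-purpose compressor will find this structure in the raw digit stream.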
The same, since lossy compression can never be worse than lossless compression. (Also, it is more complex since you have to define your loss somehow. These Neuralink samples seemingly come as .wav files, but you probably wouldn't want to encode them with MP3!)
The sample data compresses poorly: very simple first-order difference encoding followed by a decent Huffman coder easily gets down to about 4.5 bits per sample, but not much beyond that.
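A minimal sketch of that measurement, assuming integer samples and approximating the Huffman stage by the zero-order entropy of the deltas (the bound a Huffman coder approaches):

```python
import math
from collections import Counter

def delta_entropy_bits(samples):
    """Estimate bits/sample achievable by first-order difference
    encoding followed by an ideal entropy coder (approximates Huffman)."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    counts = Counter(deltas)
    total = len(deltas)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# A constant-slope ramp has a single delta value, so 0 bits/sample:
print(delta_entropy_bits(list(range(100))))  # 0.0
```

Running this over the challenge's .wav samples is what yields the ~4.5 bits/sample figure.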
However, let's assume there is massive cross-correlation between the 1024 channels. In the extreme, they are all identical, meaning that if we encode 1 channel we get the other 1023 for free. That gives a lower limit of 4.5/1024 = about 0.0044 bits per sample, or a compression ratio of about 2275 relative to 10-bit raw samples. Voilà!
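The back-of-the-envelope arithmetic, assuming 10-bit raw samples and the ~4.5 bits/sample figure from the simple delta + Huffman scheme:

```python
channels = 1024
raw_bits = 10      # bits per raw sample
coded_bits = 4.5   # bits/sample after first-order delta + Huffman

# Extreme case: all 1024 channels identical, so encoding one channel
# carries the other 1023 for free.
bound = coded_bits / channels         # bits/sample lower bound
ratio = raw_bits / bound              # implied compression ratio
print(round(bound, 4), int(ratio))    # 0.0044 2275
```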
If data patterns exist and can be found, then more complicated coding algorithms could achieve better compression, or tolerate more variations (i.e. less cross-correlation) between channels.
We may never know unless Neuralink releases a full data set, i.e. 1024 channels at 20 kHz and 10 bits for 1 hour. That's a lot of data, but if they want serious analysis they should release serious data.
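For scale, a quick sketch of what that full release would weigh in at:

```python
channels = 1024
sample_rate_hz = 20_000
bits_per_sample = 10
duration_s = 3600  # 1 hour

total_bytes = channels * sample_rate_hz * bits_per_sample * duration_s // 8
print(f"{total_bytes / 1e9:.1f} GB")  # 92.2 GB
```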
Finally, there is no apparent reason to require lossless compression. The end result -- correct data to control the cursor and so on -- is what matters. Neuralink should let challengers submit DATA to a test engine that compares the cursor output for noiseless data against the output for the submitted data and reports a match score, maybe with a graph. That sort of feedback might allow participants to create a satisfactory lossy compression scheme.
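A sketch of what such a test engine's scoring might look like. Everything here -- the RMS metric and the mapping into (0, 1] -- is a hypothetical stand-in, since no such engine or metric has been specified by Neuralink:

```python
import math

def match_score(reference_cursor, candidate_cursor):
    """Hypothetical match score: RMS error between the cursor
    trajectory decoded from clean data and the one decoded from
    lossy-compressed data, mapped into (0, 1] where 1.0 = identical."""
    err = [(a - b) ** 2 for a, b in zip(reference_cursor, candidate_cursor)]
    rmse = math.sqrt(sum(err) / len(err))
    return 1.0 / (1.0 + rmse)

# Identical trajectories score a perfect 1.0:
print(match_score([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 1.0
```

The point is not this particular metric but the feedback loop: participants could then trade reconstruction error for compression ratio and see directly whether the downstream behavior still matches.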
This reminds me a lot of the Hutter Prize[1]. Funnily enough, the Hutter Prize shifted my thinking 180 degrees towards intelligence ~= compression, because to truly compress information well you must understand its nuances.
"aside from everything else, it seems like it's really, really late in the game to suddenly realize 'oh we need magical compression technology to make this work don't we'"
All recordings were successfully compressed.

    Original size (bytes):   146,800,526
    Compressed size (bytes): 123,624
    Compression ratio:       1187.47
The eval.sh script was downloaded, and the files were encoded and decoded without loss, as verified using "diff".
What do you think? Is this true?
https://www.linkedin.com/pulse/neuralink-compression-challen... context: https://www.youtube.com/watch?v=X5hsQ6zbKIo