Neuralink Compression Challenge (neuralink.com)
18 points by crakenzak 5 months ago | 27 comments



Apparently, someone solved it and achieved an 1187:1 compression ratio. These are the results:

All recordings were successfully compressed.
Original size (bytes): 146,800,526
Compressed size (bytes): 123,624
Compression ratio: 1187.47

The eval.sh script was downloaded, and the files were encoded and decoded without loss, as verified with diff.

What do you think? Is this true?

https://www.linkedin.com/pulse/neuralink-compression-challen... context: https://www.youtube.com/watch?v=X5hsQ6zbKIo


Bogus. But a nice spoof.


In the video, it’s clear that the data was downloaded from the Neuralink website, no errors occurred, and the results are displayed correctly. Could you specify why you believe it’s bogus?


Anyone can fake a website. Can you prove that they show the real Neuralink website?


Agreed. Just read their website marketing stuff and your scam bells should be going off.


Analyzing the data, it becomes clear that the A/D used by Neuralink is defective, i.e. it has very poor accuracy. The A/D introduces a huge amount of distortion, which in practice manifests as noise.

Until this A/D linearity problem is fixed, there is no point pursuing compression schemes. The data is so badly mangled that it is pretty near impossible to find patterns.


It's actually amazing that Neuralink can use this badly distorted data. I imagine that fixing the A/D would improve their results dramatically -- lower latency and higher precision. Why Neuralink has continued working with such an obvious hardware defect is a serious question. Do they actually analyze the A/D to make sure it's working properly?


They're looking for a compressor that can handle more than 200 Mb/s on a 10 mW budget (that's including the radio, so it has to run on a CPU clocked like an original 8086) and yield a 200x size reduction. Speaking from the perspective of a data compression person, this is completely unrealistic. The best statistical models I have on hand yield a ~7x compression ratio after some tweaking, but they won't run under these constraints.
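
For a rough sense of where those numbers come from (my arithmetic, not the challenge page): using the 1024 channels, 20 kHz, and 10 bits per sample quoted later in this thread, and assuming a ~1 Mb/s radio budget, which is an assumption rather than a published spec:

    channels = 1024
    sample_rate_hz = 20_000
    bits_per_sample = 10
    radio_budget_bps = 1_000_000  # assumed wireless throughput

    raw_bps = channels * sample_rate_hz * bits_per_sample
    print(f"raw rate: {raw_bps / 1e6:.1f} Mb/s")               # ~204.8 Mb/s
    print(f"needed ratio: {raw_bps / radio_budget_bps:.0f}x")  # ~205x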


I thought 200x was too extreme as well. In the compression literature, is there a way to estimate the upper limit on lossless compressibility of a given data set?


There is not, because there could always be some underlying tricky generator that you just haven't discovered, and discovering that pattern is basically equivalent to solving the halting problem. (See https://en.wikipedia.org/wiki/Kolmogorov_complexity#Uncomput...)

As a trivial example, if your dataset is one trillion binary digits of pi, it is essentially incompressible by any regular compressor, but you can fit a generator well under 1 kB.
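
To make that concrete, here's a quick sketch (mine, not from the parent comment): Gibbons' spigot algorithm emits the digits of pi from a few lines of integer arithmetic, even though the digit stream itself looks incompressible to any general-purpose compressor. This version yields decimal digits; a BBP-style generator for binary digits is similarly tiny.

    def pi_digits():
        # Gibbons' unbounded spigot algorithm: yields decimal digits of pi
        # one at a time, using only integer arithmetic.
        q, r, t, j = 1, 180, 60, 2
        while True:
            u = 3 * (3 * j + 1) * (3 * j + 2)
            y = (q * (27 * j - 12) + 5 * r) // (5 * t)
            yield y
            q, r, t, j = (10 * q * j * (2 * j - 1),
                          10 * u * (q * (5 * j - 2) + r - y * t),
                          t * u, j + 1)

    gen = pi_digits()
    print("".join(str(next(gen)) for _ in range(20)))  # 31415926535897932384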


Cool. Thanks. How about lossy compression?


The same, since lossy compression can never be worse than lossless compression. (Also, it is more complex since you have to define your loss somehow. These Neuralink samples seemingly come as .wav files, but you probably wouldn't want to encode them with MP3!)


So, they're asking skilled engineers to do work for them for free, and just email it in?

Why didn't every other company think of this?


>So, they're asking skilled engineers to do work for them for free, and just email it in?

Yup:

"Submit with source code and build script."

But hey, the reward is a job. Maybe.

I mean, not everyone can be privileged enough to experience Ultra Hardcore™ toxic work culture.


200X is possible.

The sample data compresses poorly, getting down to 4.5 bits per sample easily with very simple first-order difference encoding and a decent Huffman coder.
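
A minimal sketch of that kind of estimate (not the commenter's actual code), assuming the recordings are 16-bit mono .wav files; the filename is a placeholder:

    import math
    import wave
    from collections import Counter

    import numpy as np

    def delta_bits_per_sample(path):
        # Read raw samples, take first-order differences, and compute the
        # Shannon entropy of the deltas -- roughly what an ideal Huffman or
        # arithmetic coder would spend per sample.
        with wave.open(path, "rb") as w:
            raw = w.readframes(w.getnframes())
        samples = np.frombuffer(raw, dtype=np.int16).astype(np.int64)
        deltas = np.diff(samples)
        n = len(deltas)
        counts = Counter(deltas.tolist())
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # print(delta_bits_per_sample("sample.wav"))  # placeholder filename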

However, let's assume there is massive cross-correlation between the 1024 channels. For example, in the extreme they are all the same, meaning if we encode 1 channel we get the other 1023 for free. That means a lower limit of 4.5/1024 = about 0.0044 bits per sample, or a compression ratio of about 2275. Voila!

If data patterns exist and can be found, then more complicated coding algorithms could achieve better compression, or tolerate more variations (i.e. less cross-correlation) between channels.

We may never know unless Neuralink releases a full data set, i.e. 1024 channels at 20 kHz and 10 bits for 1 hour. That's a lot of data, but if they want serious analysis they should release serious data.

Finally, there is no apparent reason to require lossless compression. The end result -- correct data to control the cursor and so on -- is what matters. Neuralink should let challengers submit data to a test engine that compares the cursor output for the original data against the output for the submitted data, and reports a match score, maybe with a graph or something. That sort of feedback might allow participants to create a satisfactory lossy compression scheme.


Sorry, corrected an error.

It's 2275X

That's the compression ratio for complete cross-correlation: (10 bits uncompressed / 4.5 bits compressed per channel) * 1024 channels, which is about 2275.


This reminds me a lot of the Hutter Prize[1]. Funnily enough, the Hutter Prize shifted my thinking 180 degrees towards intelligence ~= compression, because to truly compress information well you must understand its nuances.

[1] http://prize.hutter1.net/


And in exchange for solving their problem for them, you get... ???

I'm all for challenges, but it is fairly standard to have prizes.


Probably the Turing award for discovering a breakthrough compression scheme.


> apparently the best submissions get fast tracked to an onsite if you want a job


200x compression on this dataset is mathematically impossible. The noise on the amplifier and digitizer limits the max compression to 5.3x.

Here’s why: https://x.com/raffi_hotter/status/1795910298936705098
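
A sketch of the shape of that bound (mine, not the linked derivation): if each 10-bit sample carries H bits of irreducible noise entropy, no lossless coder can average better than 10/H.

    bits_per_sample = 10   # 10-bit samples, per the figures in this thread
    claimed_limit = 5.3    # ratio claimed in the linked analysis
    implied_noise_entropy = bits_per_sample / claimed_limit
    print(f"implied noise entropy: {implied_noise_entropy:.2f} bits/sample")  # ~1.9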



"aside from everything else, it seems like it's really, really late in the game to suddenly realize 'oh we need magical compression technology to make this work don't we'"

https://x.com/JohnSmi48253239/status/1794328213923188949?t=_...


< 10mW, including radio

Does that mean the radio uses a portion of this 10 mW? If so, how much?


Why should it be lossless when presumably there is a lot of noise you don't really need to preserve?


Exactly. When you look at the data, it looks entirely like noise without any signal, so why transmit that in the first place? And why losslessly?


That's the thing. First-principles thinking would say to look at that 200 Mb/s and figure out what you can lose before compressing.



