

Can we compress random data - TheAuditor
http://www.blaisemcrowly.in/technology/research/information-technology/2015/02/09/Can-we-compress-random-data.html

======
gus_massa
This can be easily rebutted by the counting argument:
[http://www.faqs.org/faqs/compression-faq/part1/section-8.html](http://www.faqs.org/faqs/compression-faq/part1/section-8.html)
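
The counting argument from the linked FAQ can be sketched numerically (a minimal illustration, not taken from the thread): for any length n there are 2^n bit strings of that length, but only 2^n - 1 strings strictly shorter, so no lossless compressor can shrink every n-bit input.

```python
def count_strings(n):
    """Number of distinct bit strings of length exactly n."""
    return 2 ** n

def count_shorter(n):
    """Number of distinct bit strings of length 0 through n-1."""
    return sum(2 ** k for k in range(n))  # equals 2**n - 1

for n in range(1, 17):
    inputs = count_strings(n)
    outputs = count_shorter(n)
    # There is always exactly one fewer shorter string than there are
    # inputs, so by pigeonhole at least two inputs would have to share
    # a compressed output -- decompression could not be unambiguous.
    assert outputs == inputs - 1
```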

~~~
TheAuditor
There is a minimum required size for the file (it can't be compressed beyond
a few bytes, even if the method is applied repeatedly), and the argument is
only valid if we assume files can be compressed to zero bytes.

No claim is made to the effect that data "can be compressed indefinitely."

As I have said, compressing over and over and over again may carry a risk of
data corruption unless better checks and methods are added (which will again
increase the minimum required file size, but you can still compress it).

~~~
gus_massa
Just for simplicity, let's assume I have a file of 1073741825 bytes =
1GiB+1B, and I want to put it on my exactly 1073741824-byte = 1GiB
pendrive. (Just assume that the OS filesystem doesn't waste any space.)

Are you sure this algorithm will compress my 1073741825-byte file down to
1073741824 bytes or less?

~~~
TheAuditor
Yes. That is the only claim I have made. You can replace the first three bytes
with two bytes using this method.
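
For context on why this specific claim runs into the counting argument, a quick tally (illustrative only, not from the article): there are 256^3 possible 3-byte prefixes but only 256^2 possible 2-byte replacements, so any fixed 3-byte-to-2-byte mapping must map many different prefixes to the same code.

```python
# Pigeonhole tally for "replace the first three bytes with two bytes".
three_byte_inputs = 256 ** 3   # 16,777,216 possible 3-byte prefixes
two_byte_outputs = 256 ** 2    # 65,536 possible 2-byte codes

# On average, each 2-byte code would stand for this many distinct
# 3-byte prefixes, making the replacement impossible to invert
# without extra information stored somewhere else.
collisions_per_code = three_byte_inputs // two_byte_outputs
print(collisions_per_code)  # 256
```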

~~~
DaveK23
I've come up with a better compression scheme than yours. My scheme is exactly
the same as yours, but you replace the first three bytes with one byte instead
of two. Doesn't matter what byte you replace them with, because - just like
your scheme - you can't get the original data back by decompressing it anyway.

~~~
TheAuditor
You can get the data back. Do read the article and the published paper.

------
DaveK23
The answer turns out to be 'no'.

There, saved you a click.

