Don't Use RAR (miletic.net)
209 points by vedranm on Feb 25, 2022 | 209 comments



RAR is a full-blown backup format* while 7zip is "just" an archive format. So comparisons are somewhat moot.

But even I, as a RAR fan and license holder, have to agree. For the average Joe sending email attachments, zip or 7z is enough.

* It can save all three file timestamps (from nanosecond precision (with NTFS) down to only 2-second precision (if you want to shave off a few bytes)), ACLs, ADS, hard and soft links, Unix devices. It can add recovery data to protect against bit rot. It deduplicates identical files. Variable-length part files. It skips already compressed files. etc. etc. (+ the GUI supports settings profiles, which I really miss in 7zip.)


RAR is the only archive/compression format that can include a PAR2-like recovery record. You can even configure what % of the size to dedicate to it.

That has been a great killer feature for me over the years.
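
For anyone curious, here's roughly what that looks like on the command line, assuming the rar binary from rarlab (the exact spelling of the recovery-record switch may vary between versions, so check `rar -?`; the file names are just examples):

    # add a ~10% recovery record while creating the archive
    rar a -rr10% backup.rar important-files/

    # later, if the archive suffers damage, try to repair it using the recovery record
    rar r backup.rar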


Confirming this, and confirming that the recovery record (that I had set to a hefty 10%, I think, because the data was important to me) did let me recover archives that suffered physical media damage, which would have been irrecoverable had I used something else. I wish 7zip had this feature.

This was one of the most satisfying progress bars: from "your data is dead" to a few minutes later "Successfully recovered full data from recovery record" :)))))

I anticipate passersby might say "but you can do the same with TAR + xyz + abc". To this I’ll reply:

1. The nice thing about RAR is that this recovery feature has been built-in and extremely easy to use, since the 90s.

2. Usability nit aside, I'd love to hear about alternatives.


> let me recover archives that suffered physical media damage

> Usability nit aside, I'd love to hear about alternatives.

Make two or more copies on different media. Sure, RAR or another archiver can save you from partial media damage but nothing saves you from total media damage other than having completely separate backups.


Two copies means 200% of the original size; recovery records normally keep you below 110%.

It's like RAID1 vs RAID5, only without the speed bonus of RAID1.


Yeah. They have a point, though: more copies are good too.

To me it's a case of “why not both?”, and I remain interested in free-software RAR alternatives with a RecoveryRecord-ish feature.


Par2 is always an option. RAR implements Reed-Solomon error correcting codes, and there are many recent implementations of those: https://github.com/klauspost/reedsolomon.


I'm sure there is a public algorithm / implementation. My question here is: is there an existing archive format (like .7z) that integrates it into the archive format and knows to update the error-correction data on archive update? Maybe not in a way that is as user-friendly as RAR, but "halfway there".

Or are you suggesting all my .7z files should come with a sibling .par2 that I must remember to regenerate after every change to the .7z? I know I was asking here about "usability-aside alternatives", but that alternative seems quite inconvenient and below the bar I was ready to accept when asking for less user-friendly stuff.


A by-the-way note for passersby on the subject of PAR recovery records:

- As usual, the Arch wiki has an excellent page on the topic: https://wiki.archlinux.org/title/Parchive

- Hey, par3 is in the works: https://github.com/Parchive/par3cmdline/issues/1

- I created [Feature request / par2create] Option to regenerate par2 files if input file changed : https://github.com/Parchive/par2cmdline/issues/170


> Reed-Solomon error correcting codes

Had to double check that. I would have sworn those codes were called Solomon-Reed, not Reed-Solomon. Guess I have my little Mandela Effect now!


I think for the common use case of distributing gigabytes of data in a reasonably efficient way, 7zip is fine (e.g. simheaven.com uses that for huge scenery downloads). Where things get tricky is splitting the resulting archive into multiple files. This used to be common with rar and I think 7zip also supports it. That is convenient when downloading over a slow/unstable connection (or if you don't know how to resume an aborted download with wget or ftp). Less of an issue these days, but of course browsers don't support this. With zip files this also used to be a thing that some tools supported. I'm old enough to remember PKZip being a new thing :-).

As an actual backup format, tar is a common alternative. I'd probably use tar with bzip2, 7zip, or even gzip compression. It supports a few more compression algorithms of course. The main downside is that tar files are less common on Windows, and the whole double-file-extension thing confuses people with file managers that hide extensions (i.e. most users on Windows or Mac). Of course most of the archive/zip UI tools out there support tar files anyway. Where it gets weird is with things like file permissions and other Linux-specific stuff like user and group IDs. That's one reason zip is so popular: it doesn't support that at all. Tar is a bit over-engineered for the simple use cases.


No, the major problem of tar is that it is not indexable. Browsing a tar means you need to scan the whole archive to locate all files and generate the entry list, and of course that gets slower as the number of files or the archive size grows. In this respect the format is even worse than zip. (Zip does have a central directory section.)

And it also suffers from the same problem as zip: the charset is undefined, because both formats predate UTF-8 becoming the universal standard. That means filenames get messed up when two systems use different charsets and the software is not smart enough to guess the right one.

These problems are all addressed in later formats (rar/7z). Newer archive formats mostly enforce a UTF-8 charset, so you won't have filenames suddenly turning into garbage for no apparent reason.

It's probably the worst format you can choose if you need to distribute a file, as opposed to using it purely as a local pack/extract format (with no browsing).


Technically speaking, TAR is indexable -- the index is distributed through the archive, at the head of each file. So long as the TAR is uncompressed, you can skip from one header to the next by reading the size of each file. It ends up looking like a bunch of random I/O, but it's manageable. (The headers are even sector-aligned!)
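
To illustrate, a rough bash sketch (not production code) of hopping from header to header in an uncompressed tar, using only the size field at offset 124 of each 512-byte header:

    offset=0
    while :; do
      # each header is 512 bytes; the file name occupies the first 100
      name=$(dd if=archive.tar bs=1 skip=$offset count=100 2>/dev/null | tr -d '\0')
      [ -z "$name" ] && break   # two all-zero blocks mark the end of the archive
      # the file size is stored as an octal string at offset 124 of the header
      size=$((8#$(dd if=archive.tar bs=1 skip=$((offset + 124)) count=12 2>/dev/null | tr -d ' \0')))
      echo "$name ($size bytes)"
      # file data is padded out to the next 512-byte boundary
      offset=$((offset + 512 + ((size + 511) / 512) * 512))
    done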

The problem arises when the TAR is compressed with a stream compressor (as most are). Since stream compression formats generally aren't seekable, there's no way to read each header without uncompressing the rest of the file in the process.

> And it also suffers from the same problem as zip: the charset is undefined. Newer archive formats mostly enforce a UTF-8 charset.

On the flip side, this means that those archive formats can't archive the contents of a filesystem which contains inconsistently encoded (or flat-out nonsense) filenames.

So long as filesystems haven't "solved" the encoding problem, I wouldn't fault an archive format for behaving similarly.


> Technically speaking, TAR is indexable -- the index is distributed through the archive, at the head of each file.

Actually, zip works in much the same way as tar. The central directory is located at the end of the file because it can only be written after all the other work is finished.

And zip actually puts a block of file info before every file's data, in a layout similar to tar, so stream decoding is possible. (Although it IS a violation of the specification to decode files based on the per-file info instead of the central directory in the zip standard.)

> On the flip side, this means that those archive formats can't archive the contents of a filesystem which contains inconsistently encoded (or flat-out nonsense) filenames.

It is a good thing IMO because you can't contribute further to the problem now (at the price of not being able to back up already screwed-up disks. Well..., at least the old format (tar) still works on old disks).


The reason I'm a RAR fan is because TAR failed me multiple times! (The first time was ~21 years ago. I lost most of my home.tar.gz backup. Because tar has no index, if something bad comes along, everything after that point can't be read.) 17 years ago, I took a deep dive into rar and replaced my tar.gz backup process with it. I had a cronjob that checked the tar.gz backups from the past week and there was a 50% chance of "Unexpected EOF in archive". There were weeks when all seven archives had an error! Disk, NFS, USB stick, it didn't matter. Also I wanted encryption, which means another pipe. I have also seen bit flips on USB sticks, where a copied JPG had inverted colours in the middle. So I wanted redundancy too. RAR has everything included, and I felt it had a low bus factor.

Time moves on. 7zip feature-creeps towards rar. Most if not all consumer OSs can open and create .zip out of the box. So, who cares ;-) Even I use Borg for my backups these days, since I don't have to care about the bus factor any more. RAR is still cool and my 17-year-old licence is still valid.


It’s gzip that caused your issue, not tar. The lack of an index in tar is actually a strength for surviving corruption, as there are no critical areas, i.e. you can lose any parts of a tar archive and recover useful data from the rest.

Tar.gz and tar.xz should never have been made a thing or become so popular; the only safe options are tar, tar.lz and, to an extent, tar.bz2.


> from nanosecond precision (with NTFS)

NTFS does not have nanosecond precision, only 100-nanosecond resolution, aka FILETIME.


Unix users will just use TAR/PAX for that and then compress the package. Much better.


Yeah I use RAR daily and I have no interest in using a tool made by companies who exist to collect data and use it to serve you ads.

I’d rather pay $29 for a lifetime license than have any chance at giving more data about me to Google or Facebook.


I'm not sure what you're talking about.

The major tool promoted in the article is 7Zip which is not made by either Google or Facebook, costs $0, and doesn't contain ads or collect data.

The two tools which are made by Google or Facebook are command line utilities which also don't contain ads.


7zip is just one of three applications mentioned in the article, and the article says it’s much slower and seems rather to recommend the offerings from Facebook and Google.

First off, 7-Zip achieves a better compression ratio, but it is much slower to compress than RAR. However, since 2013 Google's Brotli and since 2015 Facebook's Zstandard (Zstd) are two good options for file compression.

I didn’t say either of them contained ads. I said that both companies offering them exist to collect data and serve ads. I stand by that I have no interest in using any tool that they provide.

7zip is a fine application but as others have covered here it is not a full replacement for RAR.

So I’ll continue using RAR daily.


zstd and brotli are not products though, they’re just components that have been open sourced (BSD/LGPL2, and MIT respectively). The fact they’re from Facebook/Google is irrelevant here. You do not support or endorse Facebook/Google because you use zstd/brotli.


Google/Facebook aren't getting paid for these products. You are not supporting them by just using their compression tools.


The rar license prohibits writing a compatible compressor? What kind of nonsense is that? I clicked the license link and there is a scary looking anti-reverse-engineering clause, but 1) that sounds legally dubious given how the program is distributed; and 2) there is a FOSS decompressor, and studying that may be sufficient to write a compressor without examining the non-free official compressor.

The article is right that there's not much good reason to use rar these days. It is still popular in some communities though. So I've had occasion to run the decompressor, though not the compressor as far as I can remember.


From the winrar download page you can download the binaries directly without agreeing to any license, which means its clauses wouldn't apply.

It's already been established in the courts that merely using a piece of software does not imply acceptance of licensing terms; the parties must actively agree to it. That's why so many companies push in-your-face agreements that you must actively click "I agree" to.


Could I then download a zip of torvalds/linux from github and use it without abiding by the license as it’s never thrown in my face?


Yes. The GPL is not a license which dictates how you can use software, such as that you must not read it, or use ideas from it.

EULA restrictions like "no reverse engineering" are not rooted in copyright law, and borrowing ideas is in the realm of patents.

EULA's try to leverage copyright law in order to impose non-copyright restrictions, using the concept that if you violate any of the arbitrary non-copyright restrictions, the thereby violated license lapses, and that license is the document which allows you to have a copy of the software (not the fact that you paid).


I always found this aspect of 'copyright' very confusing. I still don't understand the legal basis here.


I don't fully understand it myself, because if you have a copy of the work (e.g. purchased copy of some proprietary software) the entity which distributed that to you was the copyright-holding purveyor. They did the copying, using their reserved right to do so.

No matter what yo udo with the copyrighted work, if you're not copying it, you're not infringing on copyright. The idea that your right to have a copy can lapse due to violating the EULA has holes in it, because copyright isn't about the right to have a copy, but about the right to produce and redistribute copies.

For instance, on a bit of a tangent here, if you steal a book out of someone's backpack, that is not copyright infringement, and cannot be. You didn't copy anything.

The proprietary EULA wants the law to believe that you're stealing if you continue to use the software after violating the agreement. But without connecting that to the concept of copyright infringement, the accusation has no basis, since you're just using what you paid for it.

Disassembling a binary executable to understand it is exactly the same as reading a book to understand it. Copyright is made for books and such.


You can do that anyway, but if you wanted to break the license by distributing modified copies without providing sources, you would fall foul of copyright law.


This is why people start putting license in a big comment at the top of source files. Also why when I write sites I require the create user call to include a "acceptTosAndLicense" or similar field set to true.


When you open the source the license is there.


Yes! Unlike other licenses, the GPL explicitly covers distribution, not use.


It would probably be possible to argue in court that your compressor isn't technically breaking any laws. But why go through the trouble of using formats which aren't meant to be used, when there are so many good alternatives out there which actually are intentionally freely available?


Clearly people who write licences want us to believe they are lawmakers. We need to clamp down on this and put them back in their place.


In some countries there's an exception in the copyright law if you're reverse engineering something to provide compatibility.


Also, in many countries software / algorithm patents do not exist, so you could write your own rar utility.


Microsoft ASF/WMA also has a similar limitation, where it disallows writing an open source implementation if you use the official format specification.

Luckily it too is mostly irrelevant nowadays.


> Microsoft ASF/WMA also has a similar limitation,

Doesn't mean it would hold up in court.


If you've got the resources to mount a credible legal defence against Microsoft


Microsoft ironically made its fortune by selling software that ran on IBM PC compatible x86 boxes, built around PC BIOS's that started from a clean room reimplementation of the reverse engineered IBM BIOS. IBM itself was not very significant in that market except at the very beginning. Clones for me, but not for thee.


I look back now at how weird it was going to the store looking for the IBM-compatible computers amongst the Amigas, Apples, Commodores and such.


Once you have fully "re-created the RAR algorithm" you would not need to license the official one anymore. So it's OK, just uninstall RAR from your computer at that point :)


Unar is a libre alternative.


I strongly prefer WinRAR (the software) because it does everything I am looking for in an archiving and compressing software.

- It is a great tool for archiving. Especially on Windows, it is often faster to compress a rar archive than to copy several hundred files. This fills the gap left by the lack of a `tar` tool on Windows.

- It supports changing the compression level during the compression.

- Highly backwards compatible format. The newer rar format is opt-in.

- WinRAR supports various formats such as ISO, JAR, and a few typical zip-ish formats.

- Superior drag and drop support.

- Context-aware extraction. For example, if you double click a PNG, it only extracts that PNG file. For an exe, it extracts the whole thing.

- Splitting files, locked archives, AES protection, SFX, and other features are easily accessible.

I use 7-zip for zstd, and brotli cli when I particularly need those formats. For my personal archives, rar format and WinRAR never let me down.

Yes, I get that it's not open source software and doesn't come with a permissive license, but neither does the Windows OS itself, or much other software I use. There is an open source unrar, so I don't fear being locked out of my archives.

I happily paid the $29 even though I could absolutely use it without paying a dime, just like thousands of other users.


I avoid any proprietary data compression software because they tend to try to lock you in their own file formats. Not just WinRAR, I've seen such attempts at least twice just in my own country (South Korea). I don't want to take a risk that I can't (freely) open my own archive later; I'm aware the risk would be much smaller for WinRAR, but that's still unacceptable for me.


Most of these, if not all, are covered by 7-zip.


7z does not offer archive recovery. This has saved my archived data a couple of times; it would be gone if I had been using 7z.


You can use a par2 file for that, though. Yes, it might be less convenient than having it built into the archive in some cases, but on the other hand you can a) download the .par2 file only if you need it, b) use it to add error correction to files without putting them in an archive, so you can access them directly, and c) create the error-correction information across multiple archives if you want.
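
For reference, a rough sketch of that workflow with par2cmdline (the archive name is just an example; check `par2 --help` for the exact options in your build):

    # create recovery data worth ~10% of the archive's size
    par2 create -r10 backup.7z

    # check the archive against the recovery data
    par2 verify backup.7z.par2

    # reconstruct the damaged parts if verification fails
    par2 repair backup.7z.par2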


This sounds sooo much like that "dropbox? Just use rsync" comment.

I don't think you'll get why having it all in one simple-to-use solution is superior for several of us.


the dropbox joke is that nobody would use dropbox, which is obviously false in hindsight. but nobody is claiming that nobody uses rar, but instead that nobody should use rar. the same argument is valid for dropbox: https://en.wikipedia.org/wiki/Criticism_of_Dropbox.


I'm not familiar with compression very much, what's the difference between what 7z does versus "archive recovery"? Thanks


Some compression formats are actually designed for data archival as well as compression. Rar is one such format. One of the early use cases was data archival onto long-term storage media such as tape. As such, it has built-in parity, which means you can recover partially corrupted archives. It can also do things like split the archive up over many smaller files. Again, because this sort of thing was needed for backing up to tape, floppy, etc. in the late 90s/early 2000s.


> It can also do things like split the archive up over many smaller files.

Are there other archival formats on par with RAR without the licensing voodoo I see people mentioning?


You'd need to use a tool like par2 (https://en.wikipedia.org/wiki/Parchive) on top of a 7z file to get a recovery record.


WinRAR saves extra information in the compressed file, so in case your .rar file gets damaged, it can be recovered.


>- WinRAR supports various formats such as ISO, JAR, and a few typical zip-ish formats.

Jar is just a ZIP. ISOs can be mounted under Unix.

> - Context-aware extraction. For example, if you double click a PNG, it only extracts that PNG file. For an exe, it extracts the whole thing.

Any software does that since... Amiga days?

>- Splitting files, locked archives, AES protection, SFX, and other features are easily accessible.

Splitting exists since forever.


This is conflating archive format with compression algorithm.

The very broad statement "7-Zip achieves a better compression ratio, but it is much slower to compress than RAR" should be demonstrated over all compression levels with a curve of compression-ratio vs time. I am skeptical RAR cannot be outperformed by 7zip in at least some situations.

The license is the strongest argument to not use RAR.


7-zip is single-threaded (by default; there are switches you can play with and -mt is not enough), which means that you are waiting for your stuff to compress while most of your cores sit idle.

I don't know what winrar uses, I don't use it; but this is my annoyance with 7zip.


I assume you mean command line? The 7zip UI appears to set threads to the number of CPUs by default for LZMA2, and hits 100% on all cores.

Zip format also defaults threads to number of CPUs, but only hit 30% CPU (over 16 threads) on some test data.


Yes, I mean the command line; it has been a long time since I used the GUI, though I don't think it is updated often.

On the command line, I eventually arrived at a set of switches to use, but the -mt option seems to allocate a core per file. That means that if the sizes of the archived files are not somewhat evenly distributed, it won't work very well. The extreme case: if you have one huge file and a few small ones, the small ones will be compressed quickly and you will still be waiting for the huge one to finish on a single core.

And then there are apps that use liblzma underneath. `flatpak build-bundle` can take almost forever.


I don't think I've ever actually used the official build of 7zip on the command line. Only the older p7zip version on Linux.

The p7zip version by default uses all cores and hits 100% on all of them compressing a single file. Perhaps p7zip or my OS has a different set of default options?


> the -mt option seems to allocate a core per file

this can't be true in solid archive mode, which should almost always be used (otherwise you might as well use zip). also:

  $ tee {1..10} <<< 1
  1
  $ seq 1 1000000 > 11
  $ time 7z a x.7z {1..11}; rm x.7z
  7-Zip (z) 21.07 (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-12-26
   64-bit locale=en_US.UTF-8 Threads:12, ASM
  
  Scanning the drive:
  11 files, 78888917 bytes (76 MiB)
  
  Creating archive: x.7z
  
  Add new data to archive: 11 files, 78888917 bytes (76 MiB)
  
  Files read from disk: 11
  Archive size: 2680630 bytes (2618 KiB)
  Everything is Ok
  7z a x.7z {1..11}  19.89s user 0.09s system 205% cpu 9.733 total
  $ time 7z a -mmt=1 x.7z {1..11}; rm x.7z
  7-Zip (z) 21.07 (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-12-26
   64-bit locale=en_US.UTF-8 Threads:12, ASM
  
  Scanning the drive:
  11 files, 78888917 bytes (76 MiB)
  
  Creating archive: x.7z
  
  Add new data to archive: 11 files, 78888917 bytes (76 MiB)
  
  Files read from disk: 11
  Archive size: 2779833 bytes (2715 KiB)
  Everything is Ok
  7z a -mmt=1 x.7z {1..11}  21.88s user 0.05s system 99% cpu 21.943 total


Seems that I will have to check what parameters to really use:

    $ time 7z a x.7z ubuntu-20.04.4-desktop-amd64.iso 
    
    7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
    p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,24 CPUs AMD Ryzen Threadripper 2920X 12-Core Processor  (800F82),ASM,AES-NI)
    
    Scanning the drive:
    1 file, 3379068928 bytes (3223 MiB)
    
    Creating archive: x.7z
    
    Items to compress: 1
    
                                         
    Files read from disk: 1
    Archive size: 3277419065 bytes (3126 MiB)
    Everything is Ok
    
    real 1m23,295s
    user 25m24,447s
    sys 0m27,991s
is quite a difference compared to:

    $ time 7z a -t7z -m0=lzma x.7z ubuntu-20.04.4-desktop-amd64.iso 
    
    7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
    p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,24 CPUs AMD Ryzen Threadripper 2920X 12-Core Processor  (800F82),ASM,AES-NI)
    
    Scanning the drive:
    1 file, 3379068928 bytes (3223 MiB)
    
    Creating archive: x.7z
    
    Items to compress: 1
    
                                         
    Files read from disk: 1
    Archive size: 3305891811 bytes (3153 MiB)
    Everything is Ok
    
    real 7m5,647s
    user 13m9,589s
    sys 0m6,757s


> The license is the strongest argument to not use RAR.

Why? Most of the daily tools that keep the business world running are closed source. Excel. Or Photoshop. Or Windows/macOS itself. But the RAR license and it not being open source is now pushing it too far? Really?


No, those are also bad - but harder to displace.


The same people that moan about some software being closed source will come back later crying because said software became a SaaS internet business with monthly fees...

There's nothing wrong with closed source for-profit software. I'll never understand why people in this field push to discredit an honest way to earn money from the work they do...

It's as if Medics would cry because other medics are charging them for a medical visit. Dirty medics trying to profit from their knowledge and time!!


> It's as if Medics would cry because other medics are charging them for a medical visit. Dirty medics trying to profit from their knowledge and time!!

No, it's as if one medic told another medic "you're not allowed to use any knowledge you got from my visit to help your patients unless you pay me".


> The same people that moan about some software being closed source will come back later crying because said software became a SaaS internet business with monthly fees...

You're saying that in a "Gotcha!" tone, as if you've caught these people out in an inconsistency.

You haven't; there is no such inconsistency. Closed source can be bad even if SaaS is even worse.


I'll be the one to say it: because a large majority of programmers are liberals, democrats, and anti-capitalist. And they feel everyone else should be too.


It's okay to have some business-specific software be closed source, say Excel. But an archiver is needed every time you download a collection of files from a government website, or make a GDPR request to Google for your data.

So by choosing RAR you would be forcing millions of people, maybe an entire country, to use that company's product and giving them power. Is there any good reason for that?


And what's the difference compared to forcing docx/xlsx or psd? The network effects affect every exchange, not just archives.


docx/xlsx are open formats.

About psd, yes, that's proprietary. What government agency mandates its use for the public to interact with them? (genuinely asking, not being snarky)


> docx/xlsx are open formats.

In theory, yes. In practice, you will see files in the transitional schema in the wild, and you need MS Office to work with them unless they are just a very basic document.

(I had a similar case in the past, the document contained forms, and it would not render properly even with MS Office for Mac available at the time).

> About psd, yes, that's proprietary. What government agency mandates its use for the public to interact with them? (genuinely asking, not being snarky)

Not aware of G2C, but in other scenarios, it happens.

In our country, public administration is not allowed to use rar either. Only zip, tar, gz, and tar.gz. Despite that, I did receive a rar once; also an Outlook .msg file. Rar was not really a problem, and .msg was solved by communicating with the other side (Mac Outlook cannot open them).


If a government forces its people to use particular software, it has a duty to provide that software to the entire population. If a parliament decides to pay $X billion to Microsoft for a licence for all of the UK, well, that sucks, but they can fight over it like they do with all the other random spending they do.

If it's not willing to pay the cost, it is basically creating a monopoly for that firm, and I have to pay for it. The potential for lobbying and corruption is off the scale here.


I see where you're coming from, and in principle agree with you, but it is not how the real world works.

For an extreme example, see South Korea and their ActiveX problem.


Good luck trying to send a file compressed with Zstd or Brotli to people and have them know what to do with them. I’d certainly have to google to figure out how to open one of those. And I’m an engineer and aware that those formats exist.


Zstandard and Brotli are also not archiving formats, which is what the layperson normally expects of a "compression" format, so the comparison is a bit off. But that doesn't really matter because there is 7-zip, which is an archiving format AND a compression format AND competitive with RAR AND open source.


Yeah, you’d need to use something like *nix `tar` to do the archiving part. I think newer versions of tar already have support for Zstd.


> I think newer versions of tar already have support for Zstd.

You don't really need support for a particular compression format in the tar command, since the tar format does not support compression at all; instead the output of tar is passed through a compressor such as gzip or xz that has its own encapsulation format. All the support in tar does is add a switch to start that (de)compressor for you and, for GNU tar, to detect it based on the file extension.
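
For example, assuming a reasonably recent GNU tar and the zstd tool on the PATH (a sketch, not the only way to do it):

    # let GNU tar start the compressor for you
    tar --zstd -cf backup.tar.zst mydir/

    # or do the piping yourself, which works with any tar
    tar -cf - mydir/ | zstd -o backup.tar.zst

    # on extraction, recent GNU tar detects the compression automatically
    tar -xf backup.tar.zst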


A tar.something is just a tar archive passed through a compression filter like zstd or gzip. Anyway, tar needs metadata extensions in order to allow random access, and most tarballs you'll find don't have them. So you need to read 5TB of data to find the file at offset 5TB in an archive of size 10TB. Which is why, as a general-purpose archiving format, tar needs to be applied to use cases where it works. It gets old really quickly if you have random-access requirements but no random access; SSDs remediate this a bit these days, but it's still really time-consuming.


But slower.


It depends on which time you measure. 7zip's LZMA2 is, like most LZ algorithms, asymmetric, in that it is quite fast to decompress no matter how much effort you spend on the compression. By comparison RAR uses PPM in the highest setting, which is more or less symmetric, so compression and decompression take about the same amount of time. When the target ratio is roughly the same and high enough, LZMA2 is indeed slower than (RAR's implementation of) PPM for compression but much faster for decompression.


There is no world in which LZMA2 decompression can be considered “quite fast.” Its decompression speed bears no resemblance to that of other LZ algorithms like LZ4 (which is definitely fast).

https://www.opencpu.org/posts/brotli-benchmarks/

Distributions have stopped using xz/lzma in favor of slightly inferior (size-wise) alternatives due to the time and memory decompression requirements of lzma.


> It depends on which time you measure. 7zip's LZMA2 is, like most LZ algorithms, asymmetric, in that it is quite fast to decompress no matter how much effort you spend on the compression.

In fact, better compression with LZ formats often means faster decompression (at least compared to other settings of the same format) as there is less compressed data to pass through the expensive entropy coder (which itself has symmetric cost for encoding and decoding).


You won't be able to do that with RAR files either. The only archive/compression format that you can send anyone is ZIP, since both Windows and macOS can open them ootb.


Except that... non-ASCII filenames go wrong all the time, because zip did not enforce a charset. A rar or 7z won't mess up your filenames in any case, in contrast to zip.


I wouldn't send a RAR to my grandparents, but if I had to send pictures for a school reunion I would expect everyone to either know how to open a RAR or to be able to figure it out. Worst case they will google rar and will find WinRAR as the first result.

For zstd or brotli I can't even name a GUI program able to open them outside a somewhat obscure 7zip fork.


If you sent me a RAR, I'd remind you that it's the 21st century, we all have really fast Internet these days compared to the 90s when RAR was released, and I really don't want to have to install a new piece of software just to open your photos.

(The best bit is that the official WinRAR Linux distribution is a .tar.gz)

Just use ZIP unless your classmates are working solely via tethered 2G connections in Tierra Del Fuego.


Why would you try to compress pictures? In your case zip is as good as anything else (and less problematic to open).


Ok, bad example, imagine I would be sending you ... text files or something. The point I was trying to make was how viable it is, not how useful it is.


Yes, please zip there too, since I don't think it would be gigabytes of text.

If sending gigabytes of text please use lzip ;)


> If sending gigabytes of text please use lzip ;)

I had to look up lzip.


I know it's not much used... but it's a great format for "important information", as you can read here:

https://en.wikipedia.org/wiki/Lzip#Application

https://www.nongnu.org/lzip/xz_inadequate.html

https://parltrack.org/dumps


Or if your friends all have WinZip and can tolerate .zipx (which is the file extension for a later and thus less interoperable version of ZIP), it does support JPEG recompression, so it can actually be better than RAR.


Any lossy compression format is crap.


No, the JPEG recompression I mean is not lossy here; it essentially takes the internal JPEG data structures and losslessly compresses them with more modern techniques. If you want, there is also an open source algorithm [1] developed and used by Dropbox.

[1] https://github.com/dropbox/lepton


There was a time when RAR was extremely popular because it dealt very well with splitting things across multiple diskette-sized archives, and because it edged out PKZIP in compression.

Regardless of its origins and... "traditional" use, these days it has zero relevance given that Windows and MacOS (as well as some Linux file managers) allow you to handle zip files without any additional software - and also because many file formats have evolved to support built-in compression.

I do find it extremely annoying, though, when I have to help someone rifle through old backups and need to expand a RAR archive, or when someone (for whatever reason) decides to package downloads in that way (the last culprit I remember was a Chinese company who packaged their MCU tooling that way).


RAR is still the only format that has PAR2 (?) recovery records trivially available. So it's still the best for long-term archiving.


PAR2 can be easily used with any other archive format and any compression algorithm.

For example, I am using it with the pax archive format (which, unlike tar, correctly handles all file metadata, e.g. extended attributes) and with lrzip compression (better for large files, e.g. movies, which do not compress well with the more usual algorithms).

I assume that WinRAR may have a simpler interface for inexperienced users, but a knowledgeable user can easily write a pair of simple scripts with all the command-line options needed to combine PAR2 with any archive file format and any compression algorithm, and arrange for them to be invoked e.g. by right-clicking on a directory or on an archive file.
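
As a rough illustration of what such a script might look like (the file names are placeholders, and the pax/lrzip/par2 options shown are the common ones; check the man pages before relying on it):

    #!/bin/sh
    # pack.sh DIR - archive DIR with pax, compress with lrzip, add PAR2 recovery data
    set -e
    dir=$1
    pax -w -x pax "$dir" > "$dir.pax"    # pax format keeps extended attributes and other metadata
    lrzip "$dir.pax"                     # produces "$dir.pax.lrz"
    par2 create -r10 "$dir.pax.lrz"      # ~10% recovery data alongside the archive
    rm "$dir.pax"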


Yeah, I don't know why this is never mentioned in comparisons. It is a nice-to-have feature. I wouldn't care at all about speed for long-term archiving, because it's a one-time occurrence, but I would care whether the archive format has an extra safety feature against bit rot.


>So it's still the best for long-term archiving.

No it's not; lzip is, as used by archive.org and the European IT department.


Mail servers still have restrictions on attachment size. RAR is the most convenient option in this situation.


Also, when mail scanners became too nosy, it was convenient that rar could encrypt metadata, so the archive could not be scanned.

It only lasted for a while, though; Gmail has dropped such files entirely for the last few years.


OS built-in zip handling is extremely limited. You're allowed to create archives and... that's it. You have no control over the compression level or anything else.

And frankly, 7z has terrible ergonomics. Both the UI and the CLI are uncomfortable and ugly.


> There is an unofficial unrar that is free and open-source software, but there is no free and open-source rar as creating one is prohibited by the RAR license.

From the license (EULA), I assume this is the relevant part:

> You may not use, copy, emulate, clone, rent, lease, sell, modify, decompile, disassemble, otherwise reverse engineer, or transfer the licensed software, or any subset of the licensed software, except as provided for in this agreement. Any such unauthorized use shall result in immediate and automatic termination of this license and may result in criminal and/or civil prosecution.

> Neither RAR binary code, WinRAR binary code, UnRAR source or UnRAR binary code may be used or reverse engineered to re-create the RAR compression algorithm, which is proprietary, without written permission.

> The software may be using components developed and/or copyrighted by third parties. Please read "Acknowledgments" help file topic for WinRAR or acknow.txt text file for other RAR versions for details.

Is this actually enforceable? If yes, are there worse outcomes apart from revoking the license? And I mean, the open source unrar, which is supposedly fine, doesn't come with an EULA like this. So it's probably not binding for reverse engineering based on the open source unrar.

edit: I mean if there is a separate patent, then it's a different issue.


Depends on where you live. It's not enforceable in my country according to a lawyer I consulted but it probably is enforceable in the USA. This "you agree not to exercise your rights" pattern is essentially standard legal boilerplate at this point and it boggles my mind that people can get away with it.

It inevitably results in the corporation retaining all possible rights and privileges while the consumers own nothing, can do nothing and can have whatever little they have taken away from them if they don't behave.


What confuses me is how does WinRAR continue to exist as a profitable commercial entity? Are there actually enough companies out there paying licensing fees to them for using it? I can't imagine that being the case.


> it's probably not binding for reverse engineering based on the open source unrar.

I think the "secret sauce" is the compression-specific code that allows it to reach the high compression ratio which is one of the selling points of the format. Reverse-engineering the decompression code will allow you to create valid RAR files but may not give you any clues as to how to reach the high compression ratio of the original compressor.


It looks like newer versions of unrar aren't open source. https://en.wikipedia.org/wiki/Unrar


Does anyone know why RAR is better compression-wise than the two more modern standards?

It doesn't seem like it's been updated in a long time, and I would have expected modern software from two of the best houses in the world to handily beat it. Is it that good, or is it a matter of priorities, or something else?


> Does anyone know why RAR is better compression-wise than the two more modern standards?

It simply uses a much slower algorithm (prediction by partial matching, PPM) in the highest setting. Those modern standards are designed to be fast enough and improve the status quo in that performance target. If they were only concerned about the compression ratio there are tons of other algorithms that would handily beat RAR already.


7zip also has a ton of settings with which you can get it into the same ballpark of compression ratio. But LZMA2 usually consumes more memory to do better. It also takes a decent amount of time the more you fiddle with those settings. The defaults in 7zip are fairly tame but 'good enough'. I found in my cases that if you turn on solid archiving and duplicate-file matching you are usually just as good as RAR, or better. But it costs speed.


> It doesn't seem like it's been updated in a long time

It is actively maintained. https://www.rarlab.com/rarnew.htm A few years ago, with rar5, it got a modernised file format.


UHARC is more efficient than any of them, but no one uses it.


UHARC looks abandoned and does not support 64-bit OSes.

My memory might be bad on this, but UHARC's niche was media files, while rar was a good tool for everything.


>>Does anyone know why RAR is better compression-wise than the two more modern standards?

Because RAR was an original compression algorithm (I suspect a collection of different algorithms that are applied in different cases) and those two others are based on ONE generic algorithm, which is dumb.

>>It doesn't seem like it's been updated in a long time

Because rar is perfect and exactly what I need. I'm still using RAR files - not interested in 7z and have no idea what the other standard mentioned is. RAR has a recovery record that 7z lacks - when you have archives that are decades old and have moved from one HDD to another, where HDDs develop faults and you need to recover files - that suddenly makes a difference as to why rar is still better than 7z: when your files are corrupted in 7z, they are gone.

The WinRAR license is the least of my concerns, because when WinRAR was created, times were different: there was an idea that the author (and maintainer) should have all the legal rights to his work (and that also includes compensation) - not some company that is employing talent. Also, the idea that your work should be free to everyone was a wild idea back when software developers had to pay all the bills and eat as well. Also, closed proprietary sources were historically better for security.

RAR comes from times when zip was dominating (and it was a bad archive format) - rar was better at compression than zip, it was quicker to compress, and it was also supported on Linux. I have no idea what 7z is doing nowadays, but when it was first developed, it was improving on zip, which nobody liked at that time. Also, 7z even nowadays has some limitations which require workarounds, which is time-consuming in archive creation. Anyway, none of those arguments for not using RAR seems good enough for me, especially in a non-open-source Windows environment. The only reason for me to stop using RAR would be if Windows had access to the RAR (open) source.


> The WinRAR license is the least of my concerns, because when WinRAR was created, times were different: there was an idea that the author (and maintainer) should have all the legal rights to his work (and that also includes compensation) - not some company that is employing talent.

The rights to WinRAR are held by a company, not an individual.

> Also, closed propiertary sources were historically better for security.

This is not true and was never true.

> Also, 7z even nowadays has some limitations which require workarounds, which is time-consuming in archive creation.

Like what?


This is pretty light on substance; essentially it and the post it points to say: it’s not a free license and there are better alternatives.

The post it links to, saying they don’t want people to use free commercial software, seems odd; I’m not sure I get it. The author thinks people won’t buy his software because WinRAR’s trial doesn’t expire?


I haven't come across a single rar file in years. The last time was when I was a teenager in the mid/late '90s downloading pirated games/apps.


It seems to be still fairly widely used in Germany in non-programming environments (game mods etc.).


I've only seen the use of .rar in scene related materials. And even there it's not used as frequently anymore.


For my brother's birthday one year I bought him a license for RAR.


I remember downloading a torrent of winrar compressed in rar when I was younger. That troll gave me a good laugh.


So you're the one who kept the company running!



RAR reminds me very much of Limewire/eMule/Kazaa days.


I just checked, the eMule homepage is still online: https://www.emule-project.com/


This week I was looking for a book. I couldn't find it at libgen, zlibrary, google with filetype: nor torrent. Fired up aMule, and it had more than 10 sources, downloaded in 30 seconds.


to me it goes back to Doom days


Nah, it was all about .arj then.


Oh wow, ARJ, and of course, ARC. Now I want to dial up to a BBS with my fancy 14.4K modem and download WADs via ZMODEM.


.ace and WinAce -- which died out fast, and eventually had a nasty security bug in unacev2.dll that caused it to be removed from most 3rd party utilities.

https://en.wikipedia.org/wiki/WinAce


Anyone remember .zoo? Those were the days...


I do now that you mentioned it. And there was an archive format starting with LZ* right? LZW? LZJ?

I mainly hit that one when downloading .MOD music.

(Edit, huh, .ZOO was based on LZW, so maybe I'm conflating the two)


LZW sounds familiar, LZJ doesn't.


powerpacker or bust, baby


I've been a user of 7-zip for years. I've recently switched to NanaZip, which is a fork that works with Windows 11's new context menu and adds some other niceties. https://github.com/M2Team/NanaZip


https://peazip.github.io/peazip-compression-benchmark.html Can’t beat zip’s availability even though it is quite slow… zstd compress/extract speed seems to be the best overall.


The slightly better file compression with rar is still interesting. But I suppose with today's compute, when you think about the time vs space trade-off, time is more important. Space is cheap.


It's not just "slightly better" in some cases. An obvious one is repeated files. If RAR encounters a file ten times, it will compress once and store it and nine pointers. lzma and zstd GUIs I've tested will store ten compressed copies. It happens all the time in backups.


> If RAR encounters a file ten times, it will compress once and store it and nine pointers.

A bunch of compression formats do that. Even zip files can, which leads to interesting tricks like:

https://www.bamsoftware.com/hacks/zipbomb/


Handling multiple identical files is certainly desirable - but this is not the "core" compression algo. It's a cute optimization trick...


> "lzma and zstd guis i've tested will store ten compressed copies"

It depends on dictionary size, word size, the actual data etc.


Interesting that the warez scene adopted RAR as the standard for packaging up pirated works. The alternatives like 7z save more bytes, yet RAR somehow became the winner. Anyone know why?


RAR's initial release was 1993, 7z's 1999. IIRC, I remember seeing RARs in the scene around 1995, 1996 or so. My take: RAR handled splitting into multiple arbitrary-sized volumes more gracefully than ARJ/ARC, which were CLI-first and never had a Windows GUI as nicely polished as WinRAR's. SFVs made error checking and redownloading corrupted volumes easy without relying on the compression format to handle it. ACE just came too late.

https://en.wikipedia.org/wiki/Standard_(warez)

Edit: another major use case overlooked here: a considerable number of media applications will stream files from within a RAR archive without manual unarchiving beforehand, making them more accessible from file storage sites like mega or 1fichier, without carrying the external appearance/negative baggage of an mp4 or mkv.


> I remember seeing RARs in the scene around 1995, 1996 or so. My take: RAR handled splitting into multiple arbitrary-sized volumes more gracefully than ARJ/ARC, which were CLI-first and never had a Windows GUI as nicely polished as WinRAR's.

Of course there were still people who screwed that up. I remember downloading some software once that came in the form of a multi-volume RAR, maybe 20 floppy disk-sized RARs. Download all 20 of them, get them in the same directory, then un-rar them, and lo and behold out the other end comes a single .RAR file that was inside! So I un-rar that single RAR, and out the other end comes 20 separate floppy disk images. Obviously the "scene" distributors weren't always the computing world's best and brightest...


What really bothers me now are movies being distributed over torrents as a multi-volume RAR.

Just...why?

The BitTorrent protocol will handle corrupt data and fix it at the segment level. You won't have to redownload an entire file.

Splitting a file into pieces made sense 20+ years ago when connections (Both your physical ISP connection and the logical TCP connection) were unstable, web servers didn't always support download resuming, and software didn't handle graceful unexpected disconnections, but those days are long behind us. We transfer data over encrypted channels that include checksums at the packet level. We use software that can handle disconnections, automatically reconnect, and resume where it left off, not to mention detect when data went bad and re-download just the bad part.

There's just no damn reason to split a 5 GB .mkv into 100 50 MB .rar files which will then take my poor RPi 15 minutes to decompress.


I think what we're seeing can maybe be chalked up to distributors adhering to their group's internal rules that just haven't changed since the 90s. Another likely "scene rule" thing that lasted a long while was movies encoded aiming for a particular size. The quality was tuned up or down such that every release was exactly 700MB. Why 700MB? Well, turns out there is an ancient form of data storage called CD-ROM which is limited to 700MB, and the rule (or habit) to aim for that size just never changed.


> Well, turns out there is an ancient form of data storage called CD-ROM

Heh, "ancient"... Geroffmylawn, you punks.


It's just a part of the scene. Momentum is a hell of a thing to stop in that respect. I don't know if 7zip or any other format will spit out chunked files in 50MB or 100MB parts for "easier" distribution. I say "easier" because it's the poor man's way to send files without using offsets to resume downloads.


Most likely because it is older. WinRAR was first released 1995 according to Wikipedia, 7-Zip 1999.


I’m not fully convinced of my own position here, but I have long suspected that “zipping” at all is an abstraction that shouldn’t really be exposed to the user. Like it’s a throwback to an earlier era.

The basic trade-off is size for speed, and it seems like in most cases it could be handled automatically, maybe at the level of the file system. Like if I have a big file that I want to send, it sort of feels like the software I use to send it should do the work of preparing it for transport to the target system. Similarly, if there are files that aren't being touched very often, shouldn't the file system figure out that those files can be in "small, slow" mode? Or maybe if I know a particular file should be always fast or always small, I should have the option in the file properties for it to be Automatic, Always Small, or Always Fast, like a toggle right near the permissions controls. But not like a separate program that generates separate, smaller, slower files of a special type. Why can’t folders be automatically treated like singular archives when I try to do operations on them for which that makes sense? Why do I have to think about these details?

I’m probably missing some important use cases and archiving features, but maybe those can be broken out from the default of “do the obvious thing automatically.”


Folders can be compressed archive files with the right OS/fake filesystem combination, although that's mostly ease of use.

The main 2 reasons for an archive file are giving to someone else, or compression. The FS can do compression these days, so it's mostly about giving 1 file to someone else, not 1000.

Using archive files to avoid excess file wastage seems like a bad idea, make the FS better instead


> Why can’t folders be automatically treated like singular archives when I try to do operations on them for which that makes sense? Why do I have to think about these details?

Operating systems do blur the boundaries of this a bit, now -- you can explore zip files as if they were folders, mount disk images from files, and you can work with folders as bundles (e.g. a macOS application is a disguised bundle, an RTFD document is a directory with contents etc.)

Some applications (like mail apps) have always done a good job of handling (combining, compressing) folders so they can be moved/sent as files.

The problem is all of these application things are probably best done at a GUI level. At a filesystem/command line level you want the distinction to be explicit.

Of course a few OSes ditch the distinction entirely and concern themselves only with objects -- OS/400 for example. Which might be closer to the world you are imagining.


> Operating systems do blur the boundaries of this a bit, now -- you can explore zip files as if they were folders, mount disk images from files, and you can work with folders as bundles (e.g. a macOS application is a disguised bundle, an RTFD document is a directory with contents etc.)

Also, AIUI, "Save entire page" in some browsers. You might think that should be a single .html file, but it's usually an .html file plus a directory of stuff linked in that file / on that Web page, which the browser (and sometimes the OS) then does its best to pretend is just a single file.


You can write code to do anything, and therefore anything can be viewed as something else. However, all abstractions are leaky, so this is rarely usable in practice, since those leaks trip you up all the time.


I tried Peazip for Linux but it didn't quite work for me. I remember it as kinda buggy, and anyhow, it works like a GUI wrapper that uses external console binaries to perform the actual compression/decompression tasks rather than having the routines built in.

Regarding zstd: I would love to use it cross-platform, but zstd for Windows is still kinda exotic, and yes, it lacks proper GUI applications. Now, if WinRAR would support zstd... :P :P


I typically don't make use of standard archive formats these days for my own file storage. If I want just pure maximum compression, I'll often use a long-range matcher like FreeArc's srep [1] or lrzip [2] combined with either fast-lzma2 [3] using a p7zip fork [4] for multithreading and fast compression or use mcm [5] or zpaq for max ratio with longer compression time.

However, my truly preferred way is using dwarfs [6], which features some really good deduplication and (by default) zstd compression while being mountable. Most of my files are highly compressed and easily accessible without needing to fully decompress them. I even made a small script to convert and create AppImages that use this instead [7]. Admittedly, I don't make use of PAR2 or anything of the sort, but I could just do that the traditional way if I so wished.

[1]: https://github.com/Phantop/srep

[2]: https://github.com/ckolivas/lrzip

[3]: https://github.com/conor42/fast-lzma2

[4]: https://github.com/jinfeihan57/p7zip

[5]: https://github.com/mathieuchartier/mcm

[6]: https://github.com/mhx/dwarfs/

[7]: https://github.com/Phantop/appdwarf/
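
In case anyone wants to try dwarfs, the basic round trip looks roughly like this (tool names are from the dwarfs README; the paths are placeholders and options/defaults may differ by version):

    # build a compressed, deduplicated image from a directory
    mkdwarfs -i my-archive/ -o my-archive.dwarfs

    # mount it read-only via FUSE and browse the files without unpacking
    mkdir -p /tmp/my-archive
    dwarfs my-archive.dwarfs /tmp/my-archive

    # unmount when done
    fusermount -u /tmp/my-archive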


The worst part of rar (and zip) is that they don't specify a filename encoding, so you'll end up in encoding hell.


I have not used or even opened a .rar archive in probably 10 years. I am absolutely surprised about the amount of comments of people actively using it or even saying anything good about WinRAR. Wow. Disclaimer: I use Windows and Linux daily.


All things considered, there are no reasons left to use RAR.

I sometimes run into RAR files that 7zip thinks are corrupt but WinRAR opens just fine. That by itself means I keep it installed. It doesn't hurt that the GUI is more pleasant.


I believe "us[ing] RAR" here means the creation of new RAR files. In fact your experience strikes me as another reason to actively avoid RAR; there is no guarantee that your RAR archive can be decompressed with third-party softwares other than WinRAR itself.


>>In fact your experience strikes me as another reason to actively avoid RAR;

This is only your reason.

>>there is no guarantee that your RAR archive can be decompressed with third-party software other than WinRAR itself.

I'm perfectly fine with my personal files not being openable by someone who is actively avoiding RAR (in my personal use, after RAR it was LHA that I used for archiving, because no one else knew what it was). And if I am sending my files to someone, they will certainly know how to use RAR.

What others do with THEIR PERSONAL FILES does not concern me at all, and should not concern the author of the article, who heavily mixes public and personal use of archivers. Some of the points might be applicable to public use, perhaps, but it is questionable whether the opinion of a blog writer carries any authority on a matter where everyone will do what is best for them, so the actual value of this highly opinionated article (beyond "proving" to others that you are right and they are not) is ¯\_(ツ)_/¯


Right. Because I'm a fan of not starving to death myself, I have no problem paying people for goods and services and hence don't agree with the "closed source is a sin" stance of the author. RAR is both faster and more pleasant to use on a daily basis than 7zip, so that's what I use for all my own files. If I'm going to host data or send it to someone who I know uses 7zip, then I'll use that. Generally though when I have a need to send compressed files I have no idea what the recipient has installed so I'm forced to use regular zip files.


For all of my compression needs, I usually go for 7-Zip for longer-term storage (since, while it is noticeably slower than the alternatives, the ratios can be really good for content that compresses well, e.g. anything other than images or videos).

I also use ZIP when I need good compatibility and relatively fast compression (since zip/unzip are about as easy to use as CLIs get, and 7-Zip also supports the format, as does whatever GUI compression program my *nix distro of choice ships).

I sometimes use tar with gzip when I just want to temporarily compress something to move between servers without worrying about file permissions etc.
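In case it's useful, the usual incantation for that (assuming GNU tar; the filenames are just placeholders):

    # create a gzip-compressed tarball; ownership and permissions are preserved in the archive
    tar -czf stuff.tar.gz stuff/

    # extract on the other server
    tar -xzf stuff.tar.gz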


I have never by choice used rar /or/ 7z. I'm not a heavy user of compression, just the occasional "compress for transport", so to me any savings in file size was an unnoticeable rounding error, even 15 years ago. Far more important to me is the simplicity of having tools for compression/decompression built into pretty much any computer that might receive my file, where the receiver, who may not be technologically adept, knows what it is without me having to explain that it is a compressed file and that they need to get some tool to decompress it.



I used RAR all the time in the late 1990s and early '00s. It had great usability and great FAR Manager integration - unsurprising, since they were both created by the same person, Eugene Roshal (incidentally, he studied in the same faculty/specialty as I did, a few years earlier).

But when I switched to GNU/Linux, the reasons to use RAR died quietly. It had a good run, but didn't get the widespread platform support it needed to succeed as an archive standard. It being a proprietary product didn't help in that, I guess.


I've used winRAR for a long time because of the "self-extracting executable" feature. You can set it up to create an exe (instead of a rar) that extracts files to a particular location and then runs arbitrary commands. It's like a super simple ghetto-fabulous installer and it just works.

Now I wonder what else can do that as easily in a windows environment? I haven't looked around because I haven't needed to but now I am curious. And yes, the winRAR UI is getting a bit long in the tooth.


7-Zip also supports SFX archives. Its implementation also supports a post-decompression command, which is to my knowledge not yet exposed in the 7-Zip UI, but you can write a small batch file to make one.
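A rough sketch of that batch-file approach (Windows cmd), assuming the 7zS.sfx installer module shipped with the 7-Zip extras / LZMA SDK; module name, filenames and config keys here are illustrative and may differ between versions:

    rem pack the payload
    7z a payload.7z myapp\*

    rem config.txt holds the post-extraction command, e.g.:
    rem   ;!@Install@!UTF-8!
    rem   RunProgram="setup.exe"
    rem   ;!@InstallEnd@!

    rem concatenate SFX module + config + archive into a single self-extracting exe
    copy /b 7zS.sfx + config.txt + payload.7z MyInstaller.exe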


Inno Setup (https://jrsoftware.org/isinfo.php). Probably not quite as easy as WinRAR, but close. It comes with an IDE with a point and click wizard for simple installers (i.e. just copy files from A to B).


You can have that with updates using Squirrel; https://github.com/Squirrel/Squirrel.Windows.


"Furthermore, the small differences in the resulting file size matter less over time as bandwidth is increasing and increasing fast."

Firstly, this argument, if true, would amount to: if you're on Windows, don't use anything other than ZIP.

The argument is not true, because although increasing storage and communication resources make compression less relevant for small files, there is content which is getting larger because of available bandwidth, including aggregations of smaller content.


There is open source support for RAR in unar:

https://theunarchiver.com/command-line


I've encountered RAR files unar cannot open. It seems there are several versions of the RAR format, and unar only understands the older ones.


Hmm, I thought unar supported the latest versions, while the open source version of unrar was the one that didn't support the latest versions.


One benefit of RAR is when dealing with multi-file archives, you don't need all of the files to be able to decompress one of them. You don't even need a complete file to be able to look inside!

Yes, some of your files will be broken if you don't have the entire archive, but at least you can salvage some of your data. Or you can peek in to make sure you've got the right thing before committing to multi-gigabyte downloads.


Yes, it is pretty annoying if someone posts a .7z.001 file on Usenet. You can't peek inside. You can't even see if it's password protected until you've downloaded the whole thing.


If you're using compression, and you have more than one core, you should consider trying pbzip2 - it's parallel http://compression.ca/pbzip2/

7zip may also do something similar, but it may depend on the archive type it's creating.
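For anyone who wants to try it, a quick sketch (assuming pbzip2 and 7z are installed; flag spellings may vary by version):

    # compress with all available cores by piping tar through pbzip2
    tar -cf - big_directory/ | pbzip2 -c > big_directory.tar.bz2

    # pbzip2 can also decompress in parallel (for archives it produced itself)
    pbzip2 -d big_directory.tar.bz2

    # 7-Zip can multithread as well, at least for LZMA2
    7z a -m0=lzma2 -mmt=on archive.7z big_directory/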


bzip2 is almost always the worst option. It is significantly slower and produces larger output than xz, which is in turn slower and larger than 7zip (LZMA2).


FWIW, libarchive and its command-line tool bsdtar, which are widely available and often installed by default on Linux hosts, are able to unpack most RAR files.

Other than that I agree with the post, but if you have a rar file and need to unpack it that's often good to know.
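For reference, the one-liners (assuming a libarchive build with RAR read support, which most distro packages include; the archive name is just a placeholder):

    # list the contents, then extract into a target directory
    bsdtar -tf some-archive.rar
    mkdir -p output
    bsdtar -xf some-archive.rar -C output/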


Windows has included libarchive and bsdtar by default starting with the April 2018 Update, so they can also be used there to unpack most rar files.


I was hoping this would be about OAuth 2.0 Rich Authorization Requests[1]!

[1] https://datatracker.ietf.org/doc/html/draft-ietf-oauth-rar


PeaZip (the GUI app linked in the article) does support ARJ (https://news.ycombinator.com/item?id=30466484)...


Here I thought it was going to be because the RAR tools are all still single threaded and get their pants blown off by anything that can spread the work across the cores on any modern chip.


The main feature from WinRAR I don't see in other GUIs is the ability to search within file contents (text) inside archives.


On a related note, Keka is a good (and free) GUI tool for macOS that supports a wide array of archive formats.


I'm eagerly awaiting a technological breakthrough in storage that renders compression pointless.


Meanwhile, I keep saving the WinRK installer from my antivirus software.


Haven't used RAR in ages; I assumed it wasn't a popular format anymore.


The argument on increasing bandwidths is evil.


Oh the irony of recommending formats made by Google and Facebook - two of the largest enemies of the internet.


The only remaining use case for RAR is warez FTPs.


WinRAR at least has a much better UI than 7-Zip.


That's debatable; 7-Zip's UI is serviceable.


[flagged]


Hope you don’t enjoy tetris…


And?

Everyone is using nginx, and that was made by a Russian; does that mean we should stop using it?

This is why 'guilt by association' doesn't work with your 'point'.


What if the parent comment mentioned this instead:

Also, WinRAR is closed-source Russian software.


To be fair so is AIMP.


To add to this, it may also be worth noting that Russian developers have contributed quite heavily to Linux, BSD, Microsoft Windows and Solaris. I can only assume macOS as well.


I forgot about Nginx. Thank you.


7z is also Russian.


Don't listen to peasants, use RAR


> Don't listen to peasants

Is this to be read as a Czarist "Who cares about those muzhiks?" or as a Stalinist "They're all kulaks anyway!"?



If you have a half-decent internet connection, compressing and decompressing the files is going to take more time than simply sending or downloading them.

I still remember when we used to compare compression tools and formats on compression ratio, but I can't recall the last time I cared about any of that. If it needs compressing, just use ZIP. The same has happened with audio codecs.

Video codecs are where we still have lots of work to do.

And it is sort of strange that RAR comes up: something we used to use every day and that is now, for most people, completely forgotten.


> If you have a half-decent internet connection, compressing and decompressing the files is going to take more time than simply sending or downloading them.

This might be true for transmitting files 1-to-1 (and even then, for every connection speed slower than memory there is likely a compression algorithm that makes sense), but most files are distributed 1-to-many, which is why algorithms like deflate, which might be slow to compress but are fast to decompress, make sense.

For example, Linux distributions would not keep switching between gz, bzip2, xz and now zstd for their packages if the choice was pointless.


It is just an observation I have had for quite some time. Right now iOS and macOS spend more time decompressing and verifying an update / download than the transfer itself takes. I understand not everyone has fast internet, but someday we may want to start optimising for end-to-end time rather than just download time.



