I've got a ~35 GB text file full of data, and I want to parse it so that the final file contains only unique lines. In the past, I've had no problem with cat FILE | sort | uniq, but I've never worked with anything of this magnitude. Even running a line count takes an extraordinarily long time:
time cat FILE | wc -l
Any suggestions on how I can go about extracting the unique records from a file this size?