
There was an HN story in which the author made the same point about "big data" (his intention was to show that there is very little truly big data in the real world; most people only need to process a couple of GBs). He just used standard Unix commands, and the performance turned out to be remarkably good.
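For anyone curious, here's a minimal sketch of the kind of pipeline that approach relies on (the file names and field positions are made up for illustration, not taken from the article):

  # Count the most common values of the third tab-separated field
  # across a few GB of plain-text logs, no cluster required.
  awk -F'\t' '{ counts[$3]++ } END { for (k in counts) print counts[k], k }' *.log \
    | sort -rn \
    | head

On a single machine this streams the files once and typically finishes in seconds to minutes, which is the point the article was making.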

Here is an example for SQLite: https://news.ycombinator.com/item?id=9359568
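In the same spirit, a rough sketch of how a couple of GBs can be loaded and queried with nothing but the sqlite3 CLI (the database, CSV file, and column names below are just placeholders):

  # Import a CSV into SQLite and run an aggregate query, all from the shell.
  sqlite3 analysis.db \
    -cmd '.mode csv' \
    -cmd '.import events.csv events' \
    'SELECT category, COUNT(*) AS n FROM events GROUP BY category ORDER BY n DESC LIMIT 10;'

The linked thread goes into more detail on what SQLite can handle at this scale.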




I think you're talking about this: http://aadrake.com/command-line-tools-can-be-235x-faster-tha...

I quite enjoyed it as well.


I used to work somewhere that did a lot of ETL, and we used MS databases - mainly, ahem, Access. At the time I had no idea about *nix. I've often thought since about how much easier and quicker many of those processing jobs would have been with this sort of approach, though I never dug into the implementation details. This is a great reminder that new is not always best, and that a little knowledge is a dangerous thing!


That guy's problem set allowed him to filter out a massive percentage of his data from the get-go.


Wow, thanks! I couldn't find it in search :-) Glad you found it.




