
Not to mention the difficulties with the I/O. Next-generation sequencing files, depending on the analysis and the stage of the pipeline, are often just plain text (sometimes binary for efficiency), but on the order of 5 to 100+ GB per sample. If you have, say, a thousand samples, which is good for statistical power, you need terabytes of storage, a lot of memory, and a lot of fast cores. The natural answer is a compute cluster, and some universities provide them for their researchers. Not everyone has access to $100k of parallel computing power, however.
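For a rough sense of scale, a quick back-of-the-envelope in Python (the per-sample sizes are just the ballpark figures above, not real measurements):

    # Back-of-the-envelope storage estimate for an NGS cohort.
    # Per-sample sizes are assumptions from the 5-100+ GB range above.
    samples = 1_000            # cohort size for decent statistical power
    gb_low, gb_high = 5, 100   # assumed GB per sample (small panel vs. WGS)

    low_tb = samples * gb_low / 1_000
    high_tb = samples * gb_high / 1_000
    print(f"{low_tb:.0f}-{high_tb:.0f} TB for {samples} samples, "
          "before intermediate files or backups")

That prints "5-100 TB for 1000 samples", and real pipelines multiply this further with intermediate files at each stage.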


