The reality is that although we've improved, we're still slow at getting data reliably, and at analyzing and understanding it. Science is still very much manual labor.



Not to mention the difficulties with I/O. Next-generation sequencing files, depending on the analysis and the stage in the pipeline, are just plain text files (sometimes binary for efficiency), but on the order of 5-100 GB per sample. If you have, say, a thousand samples, which is good for statistical power, you need terabytes of storage, a lot of memory, and a lot of fast cores. The natural answer is a compute cluster, and some universities provide them for their researchers. Not everyone has access to $100k of parallel computing power, however.
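
For a rough sense of scale, the arithmetic looks like this (a minimal sketch; the sample count and per-sample sizes are just the illustrative figures from above, not measurements from any real pipeline):

    # Back-of-envelope storage math for an NGS cohort.
    # Numbers are the illustrative range from the comment above.
    samples = 1_000            # cohort size, roughly what you want for power
    gb_per_sample = (5, 100)   # raw per-sample file size range in GB

    low_tb = samples * gb_per_sample[0] / 1_000
    high_tb = samples * gb_per_sample[1] / 1_000
    print(f"raw inputs alone: {low_tb:.0f}-{high_tb:.0f} TB")
    # raw inputs alone: 5-100 TB

And that is before counting intermediate files, which many pipelines produce at each step and which can multiply the footprint several times over.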



