
I'm not aware of any research lab that uses AWS for these things. It's cheaper to just buy the GPU yourself.
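A rough back-of-the-envelope break-even sketch of the rent-vs-buy argument (both prices below are illustrative assumptions, not actual AWS or retail quotes):

  # Break-even sketch: renting a cloud GPU instance vs. buying a card outright.
  # Both prices are illustrative assumptions, not actual quotes.
  ASSUMED_INSTANCE_PRICE_PER_HOUR = 2.60   # assumed on-demand rate for a multi-GPU instance
  ASSUMED_GPU_PURCHASE_PRICE = 1000.00     # assumed retail price of a single high-end card

  break_even_hours = ASSUMED_GPU_PURCHASE_PRICE / ASSUMED_INSTANCE_PRICE_PER_HOUR
  print(f"Break-even after ~{break_even_hours:.0f} hours "
        f"(~{break_even_hours / 24:.0f} days of continuous training)")

Under those assumed numbers the card pays for itself after a couple of weeks of continuous training, which is why labs that train around the clock tend to buy hardware.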



But you don't have to change the GPU yourself :-) You use it as a service. That's the good part, although the specs aren't very good.


AWS is a sponsor of this, which probably means a bunch of free resources.


The g2.8xlarge also has only 4GB of VRAM per GPU, which is too small for most recent deep learning models. The Titan X, by comparison, has 12GB.
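For a sense of scale, here is a minimal sketch of the memory arithmetic; the parameter count is an illustrative assumption (roughly VGG-16 scale), and activation memory is only noted, not computed:

  # Minimal sketch of why 4GB of VRAM gets tight when training.
  params = 138_000_000          # assumed parameter count, roughly VGG-16 scale
  bytes_per_float32 = 4

  weights_gb = params * bytes_per_float32 / 1024**3
  # Training typically also stores gradients and (with momentum) another copy
  # of the weights, so roughly triple the weight memory before activations,
  # which grow further with batch size.
  training_lower_bound_gb = 3 * weights_gb

  print(f"Weights alone: ~{weights_gb:.2f} GB")
  print(f"Weights + gradients + momentum: ~{training_lower_bound_gb:.2f} GB")

That lower bound already eats a good chunk of a 4GB card, and activations for a reasonable batch size can easily consume the rest.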



