50+ hot takes on the current and future of AI/ML (twitter.com/techno_yoda)
3 points by chse_cake 13 days ago | 1 comment





TBH I kinda agree with the argument that distributed training is too hard. It's so dependent on architecture, compute resources, and network topology that when people open that can of worms, they quickly realize the cost/benefit tradeoff is limited unless you're doing large-scale pre-training. It's just so much easier to train as much as possible on a single node.
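
For context, a minimal sketch of what "train on a single node" typically looks like with PyTorch DDP spawned across one machine's GPUs (ToyModel, the data, and the hyperparameters are illustrative, not from the thread). Gradient all-reduces stay on the node's NVLink/PCIe, so there's no cross-node topology to tune:

    # Minimal single-node multi-GPU training sketch with PyTorch DDP.
    # Launch with: torchrun --standalone --nproc_per_node=<num_gpus> train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    class ToyModel(torch.nn.Module):  # illustrative model, not from the thread
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
            )

        def forward(self, x):
            return self.net(x)

    def main():
        # torchrun sets RANK/LOCAL_RANK/WORLD_SIZE; on a single node the
        # gradient all-reduce never leaves the machine, so there is no
        # cross-node network topology to reason about.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = DDP(ToyModel().cuda(local_rank), device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

        for step in range(100):
            x = torch.randn(32, 128, device=local_rank)   # stand-in batch
            y = torch.randint(0, 10, (32,), device=local_rank)
            loss = torch.nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()  # DDP overlaps the all-reduce with backward
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

The same script only becomes "hard" once it spans nodes and you start caring about interconnect, rendezvous, and sharding strategy, which is exactly where the cost/benefit argument above kicks in.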


