
It is possible to run everything on CPU, even with fastai 1.0. It's just that training can be ~100x slower than on a GPU; even for toy exercises involving image processing and genuinely deep networks (30-150 layers), that means hours or days of training.
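For reference, forcing CPU is basically a one-liner; a minimal sketch, assuming fastai v1 where device selection flows through torch_core's defaults object:

    import torch
    from fastai.torch_core import defaults

    # Route everything (DataBunch, Learner, tensors) to the CPU
    # before any model or data loader gets built.
    defaults.device = torch.device('cpu')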

It is not fastai's fault though.

My actual use case some time ago was running test suites for some tricky, convoluted data-wrangling code that (by necessity, since it was doing some funky tiled image loading and segmentation) used fastai dataframes and such, locally on CPU, to debug those tests... neither training nor even inference really, just a throwaway micro-training session at the end to sanity-check that the data was loaded in a usable shape and things didn't break when you tried to train.
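The kind of sanity check I mean is roughly this, in plain PyTorch (the TensorDataset here is a hypothetical stand-in for the real tiled-image Dataset; the shapes are made up):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for the real tiled-image Dataset (invented shapes).
    ds = TensorDataset(torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,)))
    dl = DataLoader(ds, batch_size=4)

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # One throwaway step: wrong shapes or dtypes blow up here,
    # on a laptop CPU, instead of later on the GPU box.
    for xb, yb in dl:
        loss = nn.functional.cross_entropy(model(xb), yb)
        loss.backward()
        opt.step()
        break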

But in fastai 1.0 it was all bundled together in one big ball of yarn, with everything ultimately depending on some data loading classes that depended on the GPU driver, etc.

Anyway, it was really bad architecture and dev practices in the codebase I was working on; the tested behavior would probably not have matched production 100% anyway... I don't blame fastai much for not helping with a broken workflow, but I prefer more barebones, less opinionated frameworks, i.e. using TF or PyTorch directly, since sometimes you really need to get that "broken" thing running in production before you work on a refactored version of it :P Fastai seems very research-oriented and opinionated.

I'll definitely look into fastai 2.0 though :)



