Hacker News

Does it still have the requirement for a GPU (driver) to be around to even run, like fastai 1.0 had? (Had to manually comment requirements and imports to have it run CPU-only...)

I get it that for any serious use you'd want a GPU, but for learning and toying around you might want to be able to run and debug code on your freakin macbook! Is that too much to ask? (Some of us do code in IDEs, not in notebooks + vim on a server, and we'd want at least our test suite to be able to run locally ffs!)

(Also, hopefully they've got rid of the lovecraftian architecture with methods that can mutate an object's class [?!] - I understood the practical appeal and why they did it, but as a software engineer with sympathy for functional-programming that almost made me wanna barf :|)
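For anyone who hasn't seen the pattern being complained about: Python lets a method reassign `self.__class__` at runtime, which is roughly what fastai 1.0 did in places. A minimal sketch of the idea (the class names here are made up for illustration, not fastai's):

```python
class Greeter:
    def upgrade(self):
        # the controversial pattern: swap the instance's class in place
        self.__class__ = LoudGreeter

class LoudGreeter(Greeter):
    def greet(self):
        return "HELLO!"

g = Greeter()
g.upgrade()
print(type(g).__name__, g.greet())  # the very same object is now a LoudGreeter
```

It's legal Python and occasionally handy, but it makes static reasoning about what methods an object has nearly impossible, which is presumably the parent's complaint.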

Anyway, fastai is awesome for learning and experimenting, keep up the good work! I just hate it that it's so obnoxious to use and learn for anyone with a more traditional software engineering background...






It is possible to run everything on CPU, even with fastai 1.0. It's just that training can be ~100 times slower than on a GPU. Even for toy exercises involving image processing and actual deep networks (30-150 layers), that means hours or days of training.

It is not fastai's fault, though.
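For what it's worth, in plain PyTorch the CPU fallback is just a device selection, and nothing refuses to import without a driver; a minimal sketch (nothing fastai-specific here):

```python
import torch

# Pick the GPU when a driver is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(4, 10, device=device)
print(model(x).shape)  # same shapes and results on either device, just slower on CPU
```

The same script then runs unmodified on a laptop or on a GPU box.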


My actual use case some time ago was to run the test suite for some tricky and convoluted data wrangling code that used fastai dataframes and such (by necessity, since it was doing some funky tiled image loading and segmentation), locally on CPU, to debug those tests... neither training nor even inference, actually; just a throwaway micro-training session at the end to sanity-check that the data was loaded in a usable way and that things didn't break when you tried to train.
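That kind of end-of-pipeline sanity check is easy to express with plain PyTorch once data loading is decoupled from the GPU; a hedged sketch (the model and data here are stand-ins, not the real pipeline):

```python
import torch

def test_one_cpu_training_step():
    # stand-in data; in the real suite this would come from the
    # (tiled image) loading code under test
    x = torch.randn(8, 4)
    y = torch.randint(0, 2, (8,))

    model = torch.nn.Linear(4, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

    # we only care that the step runs end to end and yields a finite loss on CPU
    assert torch.isfinite(loss)

test_one_cpu_training_step()
```

No GPU, no driver, and it still exercises the whole load-to-train path.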

But in fastai 1.0 it was all bundled together in one big ball of yarn, with everything ultimately depending on some data loading classes that depended on the GPU driver, etc.

Anyway, there were really bad architecture and dev practices in the codebase I was working in, to be fair; the tested behavior would probably not have matched production 100%... I don't blame fastai much for not helping with a broken workflow, but I prefer more barebones and less opinionated frameworks, aka using tf or pytorch directly, since sometimes you really need to get that "broken" thing running in production before you work on a refactored version of it :P Fastai seems very research-oriented and opinionated.

I'll definitely look into fastai 2.0 though :)


I think the whole reason AI has become what it has is because these are “brute force” things you can’t do with a normal CPU. So functional programming and massively parallel algorithms are what make it possible.

Every year it gets more accessible to a wider audience. Soon there will probably be frameworks that hide the complexity completely and you can just say here’s a massive dataset, I want to train it to be a conversation bot or cat pic classifier, go. But we’re not quite there yet.


I believe OP would, for example, want to start training locally just to check for errors, then do the real run somewhere remote.

Synchronizing local and remote code shouldn't take much time, but it's still at least a few seconds on the critical path for the run->fail->fix->rerun loop.

VSCode's remote mode might be worth a try for people with such a setup.





