
I haven't used fast.ai yet, but I use PyTorch extensively for anything to do with deep learning (it is so much better than TensorFlow). It's great to see a Keras-like higher-level abstraction library for deep learning.

A few questions:

1. What is the benefit of using fast.ai for someone well acquainted with PyTorch (for academic use)?

2. How well does fast.ai interface with PyTorch itself? Can parts of a program be in fast.ai and other parts in PyTorch?

3. Am I correct in assuming that, despite being very fast, fast.ai is still slower (even if marginally so) than PyTorch itself?

1. Less boilerplate, so you can focus on your algorithm, and best practices built in, so you may be surprised by how much your speed and accuracy improve.

2. Very close integration with PyTorch. fastai is designed to extend PyTorch, not hide it. E.g. fastai uses standard PyTorch Datasets for data, but then provides a number of pre-defined Datasets for common tasks.

3. fastai is not slower than PyTorch, since PyTorch is handling all the computation. It'll often be faster than your handwritten PyTorch, however, since we went to a lot of effort to use performant algorithms.
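To illustrate the "standard PyTorch Datasets" point above: a map-style PyTorch Dataset is just an object implementing `__len__` and `__getitem__`, so anything following that protocol plugs into either library. Here's a torch-free sketch of the interface (the class name and toy data are made up for illustration; a real Dataset would return tensors):

```python
class SquaresDataset:
    """Minimal object satisfying the PyTorch map-style Dataset protocol."""

    def __init__(self, n):
        self.n = n

    def __len__(self):
        # number of samples in the dataset
        return self.n

    def __getitem__(self, i):
        # return an (input, target) pair for sample i
        return i, i * i

ds = SquaresDataset(5)
print(len(ds), ds[3])  # 5 (3, 9)
```

In actual use you would subclass `torch.utils.data.Dataset` and hand the result to a `DataLoader`, but the duck-typed contract is exactly this pair of methods.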

Thanks for replying.

From appearances alone, everything about fast.ai seems to be positive, with no obvious downsides compared to native PyTorch.

I often need to train baselines for my projects. That will be the perfect place to start using fast.ai. I am sold.

I have heard a few people lament the lack of a Keras equivalent for PyTorch. fast.ai also appears to take care of that.

Thanks for the great work.

1. You get a state-of-the-art training loop that includes the latest research findings, such as 1-cycle.

2. Yes. You can make everything project-specific if you want to. E.g. image segmentation datasets are just slightly modified Dataset classes.

3. fastai extends PyTorch in a very Pythonic OO sense, so I think the only speed issues could come from that, and maybe from maintaining a few extra dicts in memory. If it's about I/O, probably not; if it's about parallelization (for NLP), definitely not. In fact, I can't think of a way in which there's a significant speed penalty.
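For readers unfamiliar with the 1-cycle policy mentioned above (Leslie Smith's technique): the learning rate ramps up from a low starting value to a maximum, then anneals back down over one cycle spanning the whole training run. A simplified pure-Python sketch using linear ramps (fastai's actual implementation differs in details such as the annealing shape and momentum scheduling; the parameter names here are illustrative):

```python
def one_cycle_lr(step, total_steps, max_lr, div_factor=25.0, pct_start=0.3):
    """Simplified 1-cycle schedule: linear warmup from max_lr/div_factor
    up to max_lr over the first pct_start of training, then linear
    decay back down for the remainder."""
    warmup_steps = int(total_steps * pct_start)
    start_lr = max_lr / div_factor
    if step < warmup_steps:
        # warmup phase: ramp up toward max_lr
        t = step / max(1, warmup_steps)
        return start_lr + t * (max_lr - start_lr)
    # annealing phase: ramp back down toward start_lr
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max_lr - t * (max_lr - start_lr)

lrs = [one_cycle_lr(s, total_steps=100, max_lr=1.0) for s in range(101)]
print(lrs[0], max(lrs), lrs[100])  # 0.04 1.0 0.04
```

The idea is that the high middle portion acts as a regularizer while the low start and end keep training stable, which is part of why the defaults often train faster than a hand-rolled loop with a constant learning rate.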
