I get that it is meant to work on the very bleeding edge of deep learning technology, but at the same time it is sold as "practical". I at least would be slightly uncomfortable putting anything into production with a library that is all but guaranteed to receive no (compatible) development attention whatsoever after a couple of months, once the developers have started working on the next version.
But I guess it may simply be too tough a nut to crack: providing a bleeding-edge deep learning library with production-quality life-cycle support.
However, they would be outdated anyway: core models and algorithms go stale quickly, and any change that lets us achieve similar or better results with less effort spent creating training data is easily worth the engineering work.
That said, I really hope v2 feels a bit more like other libraries: extending v1 models has been pretty painful on several occasions. E.g. making changes to the underlying PyTorch models was very straightforward, but then still using all the training goodies built into fastai (in particular everything based on the work of Leslie Smith, tuned for best practices inside the fastai universe) was pretty painful. It is awesome to have a library actually implement best practices from the latest research, but sometimes all this greatness was hard for me to carry over to modified models.
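For concreteness, the Leslie Smith work referenced above centers on the one-cycle learning-rate policy. Below is a minimal, dependency-free sketch of the idea: the learning rate cosine-warms up from a low value to a peak, then cosine-anneals down to a much lower value. This is only an illustrative approximation, not fastai's actual implementation; the parameter names (`pct_start`, `div`, `div_final`) are chosen to mirror fastai's conventions but the function itself is a standalone sketch.

```python
import math

def one_cycle_lr(step, total_steps, lr_max=1e-3,
                 pct_start=0.25, div=25.0, div_final=1e4):
    """Sketch of a one-cycle LR schedule (after Leslie Smith).

    Warms up from lr_max/div to lr_max over the first pct_start
    fraction of steps, then anneals down to lr_max/div_final,
    using cosine interpolation in both phases.
    """
    warm = int(total_steps * pct_start)
    if step < warm:
        # warm-up phase: cosine rise from lr_max/div to lr_max
        pos = step / max(1, warm)
        start, end = lr_max / div, lr_max
    else:
        # annealing phase: cosine fall from lr_max to lr_max/div_final
        pos = (step - warm) / max(1, total_steps - warm)
        start, end = lr_max, lr_max / div_final
    # cosine interpolation: pos=0 gives start, pos=1 gives end
    return end + (start - end) / 2 * (1 + math.cos(math.pi * pos))
```

In a plain PyTorch training loop one would call this once per batch and assign the result to each parameter group's `lr`; in fastai v1 the equivalent behavior comes packaged as `fit_one_cycle`, which is exactly the kind of convenience that was hard to keep when the underlying model changed.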
Still, it has worked for us in v1, and the benefits have outweighed the problems by far.
So instead we'll be maintaining fastai v1 as a separate branch and accepting PRs as long as people are using it. But v2 is designed to leverage a lot of the new ideas that have come up in the last couple of years, both in our research, and more widely.