However, they would be worth it anyway: core models and algorithms become outdated quickly, and any change that lets us achieve similar or better results with less effort spent creating training data is easily worth the engineering work.
That said, I really hope v2 feels a bit more like other libraries: extending v1 models has been pretty painful on several occasions. E.g. making changes to the underlying PyTorch models was very straightforward, but still using all the training goodies built into fastai (in particular everything based on the work of Leslie Smith, tuned for best practices inside the fastai universe) was pretty painful. It is awesome to have a library that actually implements best practices from the latest research, but sometimes all this greatness was hard for me to transfer to changed models.
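For context, the Leslie Smith work referenced here is the one-cycle policy behind fastai's `fit_one_cycle`: the learning rate warms up to a peak and then anneals down to a small final value over one training run. A minimal pure-Python sketch of such a schedule (the function name, linear shape, and default values are my own illustration, not fastai's actual implementation):

```python
def one_cycle_lr(step, total_steps, lr_max=1e-3, pct_start=0.3,
                 div=25.0, div_final=1e4):
    """Learning rate at `step` under a simple linear one-cycle policy.

    Warms up from lr_max/div to lr_max over the first `pct_start`
    fraction of steps, then anneals down to lr_max/div_final.
    (Defaults loosely mirror fastai's, but this is a sketch.)
    """
    warmup = int(total_steps * pct_start)
    lr_start = lr_max / div
    lr_end = lr_max / div_final
    if step < warmup:
        t = step / max(1, warmup)                       # 0 -> 1 during warmup
        return lr_start + t * (lr_max - lr_start)
    t = (step - warmup) / max(1, total_steps - warmup)  # 0 -> 1 during anneal
    return lr_max + t * (lr_end - lr_max)
```

The pain point described above is that this scheduling (plus the callbacks wired around it) lives inside fastai's training loop, so a model swapped in from plain PyTorch does not automatically benefit from it.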
Still, v1 has worked for us, and the benefits outweighed the problems by far.