The project is very cool. As a professional deep learning researcher myself, though, I think the author could have done a few things to make his life simpler and get better results. For example, "image translation" (an encoder-decoder structure) is what standard semantic segmentation models use, and many of those are available pretrained, including versions that run on low-power mobile hardware. That would have saved the author a lot of time by not having to hand-label images, and would likely have done a better job, because those models were trained on gigantic datasets instead of relatively few hand-labeled images.
The normal way to do this would be to use MobileNetV2 with the TensorFlow object detection pipeline:
https://github.com/tensorflow/models/blob/master/research/ob...
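To illustrate the encoder-decoder idea: a minimal sketch of a segmentation model built on a MobileNetV2 encoder in Keras. This is my own toy example, not the author's code or the linked pipeline; in practice you would load `weights="imagenet"` (set to `None` here so the sketch runs offline) and fine-tune on your task.

```python
import tensorflow as tf

def build_segmenter(input_shape=(224, 224, 3)):
    # Pretrained-style encoder; weights=None keeps the sketch self-contained,
    # in real use you would pass weights="imagenet" and fine-tune.
    encoder = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)

    # Simple decoder: five stride-2 transposed convs upsample the
    # 7x7 encoder feature map back to the 224x224 input resolution.
    x = encoder.output
    for filters in (256, 128, 64, 32, 16):
        x = tf.keras.layers.Conv2DTranspose(
            filters, 3, strides=2, padding="same", activation="relu")(x)

    # One sigmoid channel per pixel: a binary segmentation mask.
    mask = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(encoder.input, mask)

model = build_segmenter()
```

A real encoder-decoder segmenter would also add skip connections from intermediate encoder layers (U-Net style) to recover fine spatial detail, but the shape of the idea is the same.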