Three lines, and everything else is abstracted. You would have ai.vision and ai.text for a start, with accessors to other state-of-the-art models (like ai.vision.imagenet if you know better what you are doing).
The underlying models would be shipped and updated with newer versions of Python.
Not exactly. The problem is that different models are tuned for different use cases. So I'd expect something like this:
$ pip install pymodel-ResNet-1337
#! /usr/bin/env python3
import ai
model = ai.loadModel("ResNet-1337", input=ai.input.Image, output=ai.output.Category)
model.guess(open('myimage.png', 'rb'))  # image files should be opened in binary mode
Problems abound; for example, how would you resample/rescale the image?
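The resampling question is real: a model trained on fixed-size inputs needs every image coerced to that shape, and the library would have to pick an interpolation policy for the user. A pure-Python sketch of the simplest policy, nearest-neighbour rescaling on a list-of-lists "image" (function name invented; real libraries would use Pillow or NumPy):

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbour resample of a 2-D list-of-lists 'image'."""
    old_h, old_w = len(image), len(image[0])
    return [
        [image[r * old_h // new_h][c * old_w // new_w] for c in range(new_w)]
        for r in range(new_h)
    ]

# A 2x2 "image" upscaled to 4x4: each pixel becomes a 2x2 block.
img = [[1, 2],
       [3, 4]]
print(resize_nearest(img, 4, 4))
```

Whether to use nearest-neighbour, bilinear, or antialiased downsampling changes the pixel statistics the model sees, which is exactly why hiding this choice behind model.guess() is tricky.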
I'd want, to start, a standard model format that can be serialized/deserialized in any language and that can be (for example) pip installed and loaded. People seem to use HDF5, but I don't think there is any real "standard".
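To make the "serializable into any language" idea concrete, here is a toy sketch using JSON as the interchange format (the field names and the linear "architecture" are invented for illustration; a real format would carry far more metadata):

```python
import json

# Toy "model": weights plus enough metadata to reconstruct it anywhere.
model = {
    "format_version": 1,
    "architecture": "linear",      # hypothetical identifier
    "input_shape": [2],
    "weights": [[0.5, -1.0]],
    "bias": [0.25],
}

blob = json.dumps(model)           # serialize: any language can read JSON
restored = json.loads(blob)        # deserialize on the other side

def predict(m, x):
    """Apply the restored linear model to an input vector."""
    return [
        sum(w * xi for w, xi in zip(row, x)) + b
        for row, b in zip(m["weights"], m["bias"])
    ]

print(predict(restored, [2.0, 1.0]))  # 0.5*2.0 - 1.0*1.0 + 0.25
```

The hard part a real standard has to solve is not the container (JSON, HDF5, protobuf all work) but agreeing on the vocabulary of layers and operations so every runtime reconstructs the same computation.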
So I'd expect the first incarnations of this idea to look like this:
$ pip install tfmodel-ResNet-1337
or
$ pip install kerasmodel-ResNet-1337
With some hooks for loading models:
#!/usr/bin/env python3
# Whereas before you'd build a network, train it, and then use it,
# here you get the whole shebang in one go.
import keras
model = keras.loadInstalledModel("ResNet-1337")
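One plausible way such a hook could work: the pip package ships the weights as package data under a naming convention, and the loader simply imports it. A stdlib-only sketch, with the naming convention and the load() entry point both invented:

```python
import importlib

def load_installed_model(name, prefix="kerasmodel"):
    """Locate a pip-installed model package by naming convention and
    return whatever its load() entry point builds (convention hypothetical)."""
    module_name = f"{prefix}_{name.replace('-', '_')}"
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        raise LookupError(
            f"no installed model package {module_name!r}; "
            f"try: pip install {prefix}-{name}"
        )
    return module.load()  # assumed convention: each package exposes load()

# Asking for a model that isn't installed raises a clear, actionable error.
try:
    load_installed_model("ResNet-1337")
except LookupError as e:
    print(e)
```

For what it's worth, Keras already ships something close to this in spirit: keras.applications exposes pretrained networks such as ResNet50 with downloadable ImageNet weights, just without the per-model pip packaging.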