You'd have to balance the granularity of the input and output connections - too many would put off users, too few would make users feel restricted. If you manage to find a sweet spot, or let users pick their level of expertise and reveal connections accordingly, it would be perfect.
I really like how good it looks, and can't wait to use it myself!
I just watched the video, and one of my first thoughts was Mike Matas's 2016 video "The Brain", where he built a neural network inside Quartz Composer: https://youtu.be/eUEr4P_RWDA
If I could process my catalog of videos (~30 hours) for a reasonably small amount of cash, it would save me hours of mucking about with ML/OpenCV, etc. - though I'd miss the learning experience.
Do you have a team implementing most of the new state-of-the-art model architectures (given how fast new ones keep getting published)?
If so, I'm assuming you associate certain types of model architectures with the type of input data?
I'm just curious how you'd pick a particular architecture.
On the other hand, AutoML comes to mind, but IMO the biggest hurdle for AutoML and its ilk is the massive computational infrastructure requirement.
But great job, it looks really good and seems pretty intuitive!
Something really interesting we have discussed as a future feature is training a model on data about which architectures end up working best for different data types, so that Lobe could use AutoML to suggest better starting templates, or make suggestions on the fly while you are building your model.
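As a rough, hypothetical sketch of the simplest version of that idea - a nearest-neighbour lookup over past training runs (all names and numbers below are made up for illustration, not our actual implementation):

    # Hypothetical sketch of the meta-learning idea above, not Lobe's
    # implementation: suggest an architecture template based on which
    # templates worked best on similar datasets in past runs.
    from collections import Counter

    # (num_examples, num_classes, input_type) -> best architecture, from past runs
    PAST_RUNS = [
        ((5_000, 10, "image"), "mobilenet_v1"),
        ((200_000, 1_000, "image"), "inception_v3"),
        ((2_000, 5, "audio"), "1d_convnet"),
        ((50_000, 2, "text"), "lstm"),
    ]

    def suggest_architecture(num_examples, num_classes, input_type, k=3):
        """Vote among the k most similar past datasets - a crude
        nearest-neighbour meta-learner."""
        def distance(features):
            n, c, t = features
            type_penalty = 0.0 if t == input_type else 1e9  # prefer same input type
            return abs(n - num_examples) + 100 * abs(c - num_classes) + type_penalty

        nearest = sorted(PAST_RUNS, key=lambda run: distance(run[0]))[:k]
        return Counter(arch for _, arch in nearest).most_common(1)[0][0]

    print(suggest_architecture(10_000, 12, "image"))  # -> mobilenet_v1

The real version would learn much richer dataset features and performance estimates, but the lookup structure is the same.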
As for the community model zoo, would it be free access to any model, with payment for training and disk usage (FloydHub-like)? Or would you go the Quantopian route?
The marketplace play mentioned in other comments seems like a thing worth trying. I can confirm I would pay to query predictions through the API.
Such a polished product for 3 folks. Kudos!
Also, AFAIK the AutoML alpha initially only supports vision tasks, while this allows nearly any input type.
It goes the other way as well and supports generation.
From a deep-learning novice: Can you give a rough idea of the processing cost of doing something like setting up your water tank level recognition?
The architecture implemented with Lobes for object detection is called YOLOv2 (https://pjreddie.com/darknet/yolo/). It is fairly state-of-the-art for that type of problem and has ~70 million parameters being learned (matrices that get multiplied and added together). With a webcam and a GPU over the network, we typically see ~1-5 fps, with a lot of network overhead from sending output images - we're looking to make that faster for API deployment. The site above lists it at 62.94 Bn FLOPs per frame.
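As a rough back-of-envelope on what those FLOPs imply, assuming (my numbers for illustration, not a measured benchmark) a GPU with ~10 TFLOPS peak and ~30% effective utilization:

    # Back-of-envelope throughput estimate from the numbers above.
    # Assumed: ~10 TFLOPS peak GPU, ~30% effective utilization.
    flops_per_frame = 62.94e9   # YOLOv2 cost per frame, from the YOLO site
    gpu_peak_flops = 10e12      # assumed GPU peak throughput
    utilization = 0.30          # assumed effective utilization

    compute_fps = gpu_peak_flops * utilization / flops_per_frame
    print(f"compute-bound: ~{compute_fps:.0f} fps")  # ~48 fps

So the model itself could run at tens of fps; the observed ~1-5 fps is consistent with network overhead (shipping images back and forth) dominating.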
Awesome work and I hope to see the product grow!
Deep Learning Studio: http://deepcognition.ai/
KNIME: https://www.knime.com/ (has a deep learning plugin)
RunwayML: https://runwayml.com/ (in beta, not yet open to the public)
Do you support transfer learning, for instance with models pre-trained on ImageNet? A lot of problems have limited datasets and only work by training the last layers of a pre-trained model (a minimal sketch of what I mean follows below).
And do you support training on cloud-based public datasets? Uploading a large public dataset doesn't make much sense.
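For context, this is the pattern I mean - a minimal Keras sketch (my example, not Lobe's API) that freezes an ImageNet-pre-trained backbone and trains only a new head:

    # Minimal transfer-learning sketch (illustrating the question, not
    # Lobe's API): reuse ImageNet weights, train only the new head.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pre-trained backbone

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, epochs=5)  # train only the head on the small dataset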
Really looking forward to trying your platform!
The lobes in the UI are all essentially functions that you can double-click into to see the graph they use, all the way down to the theory/math.
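As a loose analogy (hypothetical Python, not Lobe's actual internals): a lobe is like a function built from smaller functions you can keep drilling into, down to the raw math:

    # Loose analogy for lobes-as-functions, not Lobe's internals.
    import numpy as np

    def dense(x, W, b):            # innermost level: the actual math
        return x @ W + b

    def relu(x):
        return np.maximum(0.0, x)

    def dense_relu_lobe(x, W, b):  # a "lobe" wrapping the graph above
        return relu(dense(x, W, b))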
If you want more comprehensive ways to learn the theory, I highly recommend Stanford's 231n course (http://cs231n.stanford.edu/) and the Goodfellow/Bengio/Courville Deep Learning book (https://www.amazon.com/Deep-Learning-Adaptive-Computation-Ma...)