
All three of the big cloud providers have a solution for ML model deployment and support more libraries than just Fast.AI. What are some of the reasons one would use Deepserve.ai vs. the other cloud providers?

This is really neat though. The fact that it's so easy to use will appeal to a lot of data scientists who don't want to write production code or deal with lots of bootstrap configuration. Abstracting the deployment also has a lot of benefits: you can seamlessly add features like logging, or make performance improvements by tweaking a few env vars, and everyone gets them by default! Thanks for sharing.




Great question!

My goal with Deepserve.ai is to be drastically easier than the big cloud providers, with less lock-in. Rather than giving you infinite configurability with low-level primitives, I want to give you an easy, scalable solution right out of the box. SageMaker and TensorFlow Serving have some quick-to-deploy options like this, but they're really built with the expectation that you'll use their entire ecosystem.

I started off with fast.ai for simplicity and want to expand to support PyTorch, TensorFlow and Apache MXNet. My aim is to have an option that is tested and proven for every library version and every mix of inputs/outputs. So if you have a PyTorch 1.5.0 model that expects an image and returns a set of bounding boxes, you choose that configuration and have everything ready to go.
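
For concreteness, here's a rough sketch of what that "PyTorch 1.5.0, image in, bounding boxes out" contract looks like on the model side, using a stock torchvision detector. The deploy call at the end is hypothetical, just to illustrate picking a configuration; it's not the actual Deepserve.ai API.

  # Sketch of the "image in -> bounding boxes out" contract.
  import torch
  import torchvision

  # Stock detector matching the example (PyTorch 1.5.0 / torchvision 0.6 era).
  model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
  model.eval()

  def predict(image):
      # image: float tensor [3, H, W] with values in [0, 1]
      with torch.no_grad():
          out = model([image])[0]
      # out is a dict of tensors: boxes [N, 4], labels [N], scores [N]
      return out

  # Hypothetical deployment call -- illustrative only, not the real API:
  # deepserve.deploy(model, framework="pytorch==1.5.0",
  #                  input="image", output="bounding_boxes")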

You're right on about having things like logging and tuning built in for everyone from the get-go.

I will obviously never be cheaper than the big cloud providers. I want to compete on ease of use and on saving you the soft costs of hiring for or dealing with devops.

Thanks for your kind words!



