Show HN: Deepserve.ai – Heroku for Deep Learning (deepserve.ai)
7 points by jeffrwells 12 days ago | 8 comments

Hi HN,

I’m Jeff Wells. I am launching https://www.deepserve.ai — a platform to deploy and host machine learning models.

Deepserve makes it extremely easy to take a trained model and deploy it with a single CLI command: `deepserve deploy`

We host the model, deploy it, and give you an API endpoint you can call to make predictions from your applications. We manage the devops and dependencies and ensure your model is always running. We’ll scale up as much as your application needs, and scale down during off-peak hours to save you money over renting your own servers.

In order to make it easier to use, we have client SDKs to make calling your model a single line of code. We store all of the production inference data so you can grow your training sets with real examples.
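To give a concrete picture of the "single line of code" idea, here is a minimal sketch of what such a client might look like; the class name, endpoint path, and auth scheme are illustrative assumptions, not Deepserve's actual API:

```python
import json
import urllib.request

class DeepserveClient:
    """Hypothetical sketch of a thin prediction client (not the real SDK)."""

    def __init__(self, model_id, api_key, base_url="https://api.deepserve.ai"):
        # Assumed endpoint layout: one predict route per hosted model.
        self.endpoint = f"{base_url}/v1/models/{model_id}/predict"
        self.api_key = api_key

    def predict(self, payload):
        # POST a JSON payload and return the decoded prediction.
        req = urllib.request.Request(
            self.endpoint,
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
```

With a wrapper like this, calling the model really is one line: `DeepserveClient("my-model", key).predict({"text": "..."})`.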

OP here.

We’re now open for beta! I want to onboard companies slowly as it’s critical our infrastructure is robust. Out of the gate I am supporting Fast.ai and working hard to support PyTorch and TensorFlow.

If you don’t have the skillset right now to build your own models — reach out on https://www.deepserve.ai/consultancy or email me. I can build models for you and host them on Deepserve. I’ll be transparent that I want to make money on hosting, and would love to take your use case and turn it into a case study.

As an example — I have a client on the platform right now using a text classifier that auto-tags and auto-categorizes their support emails. These emails then get routed and sorted automatically, with the most urgent cases alerting them via Slack.
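As a rough sketch of what the routing side of that pipeline might look like (tag and queue names here are invented for illustration; the classification itself would come from the hosted model):

```python
# Tags that should page the team in Slack immediately (assumed examples).
URGENT_TAGS = {"outage", "billing_dispute"}

def route_email(predicted_tags):
    """Pick a destination queue from the model's predicted tags."""
    tags = set(predicted_tags)
    if tags & URGENT_TAGS:
        return "slack_alert"      # most urgent cases alert via Slack
    if "sales" in tags:
        return "sales_queue"
    return "general_queue"        # everything else is sorted normally
```

The model does the tagging; the routing rules stay as plain application code the customer controls.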

This has been a side project for over a year, and I’m finally ready for launch. Feedback is welcome and I’d love to connect with anyone who aligns with the mission of bringing machine learning to the world.

You can reach me at jeff @ deepserve.ai

All three of the big cloud providers have a solution for ML model deployment and support more libraries than just Fast.AI. What are some of the reasons one would use Deepserve.ai vs. the other cloud providers?

This is really neat though. The fact that this is so easy to use will be a big appeal to a lot of data scientists who don't want to write production code or deal with lots of bootstrap configuration. There are also a lot of benefits of abstracting the deployment as you can seamlessly add a lot of features like logging, or even make performance improvements by tweaking a few env vars and everyone will get it by default! Thanks for sharing.

Great question!

My goal with Deepserve.ai is to be drastically easier than the big cloud providers, with less lock-in. Rather than giving you infinite configurability with low-level primitives, I want to give you an easy, scalable solution right out of the box. SageMaker and TensorFlow Serving have some quick-to-deploy options like this, but they’re really built expecting you to use their entire ecosystem.

I started off with Fast.ai for simplicity and want to expand to support PyTorch, TensorFlow and Apache MXNet. My aim is to have an option that is tested and proven for every library version and every mix of inputs/outputs. So if you have a PyTorch 1.5.0 model that expects an image and returns a set of bounding boxes, you choose that configuration and have everything ready to go.
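To illustrate, a per-model configuration of that sort might be declared roughly like this; the keys and values are assumptions for the sake of the example, not Deepserve's actual schema:

```python
# Hypothetical model configuration: framework + version + input/output shape.
model_config = {
    "framework": "pytorch",
    "framework_version": "1.5.0",
    "input": {"type": "image", "format": "jpeg"},
    "output": {
        "type": "bounding_boxes",
        "fields": ["x_min", "y_min", "x_max", "y_max", "label", "score"],
    },
}
```

The idea being that each (framework version, input type, output type) combination maps to a pre-tested serving setup.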

You're right on with having things like logging and tuning built in for everyone from the get-go.

I will obviously never be cheaper than the big cloud providers. I want to compete on ease of use and the soft costs of hiring for or dealing with devops.

Thanks for your kind words!

Hi Jeff, I'm not your target customer, but I could become one if I ever launch my own startup. So I'm curious about:

What's your background? How did you come up with this idea? How did you get started? How did you get funding? How do you market this product (other than posting on HN)? How much do you work on it (what's your day like)? When do you plan to become profitable? Are you worried about competition (e.g. Amazon offering an easy way to deploy a model)?

I'd love to share, thanks!

My background is as an application developer (RoR, React, Python) who fell in love with machine learning. I wanted to use ML at the startups I worked at, but with everything involved it would always become a bigger project than anticipated and fall out of priority.

The aim with Deepserve.ai is to shorten the cycle enough that you could train and deploy a prototype model in a day and a production-ready model in a week (if you had the data ready).

I am actually still part-time and bootstrapped — I spend a few hours a day hacking on the platform and am starting to market to ML practitioners and sell into enterprise. It's definitely a lot of context switching — from React to CLI writing to AWS devops to marketing.

My aim is to be profitable out of the gate by having pricing with sufficient margin over my underlying compute cost. I'll definitely be more expensive than doing it yourself — but the point is that you don't have to do it.

If you launch your startup, definitely reach out — I'd love to add AI to your product! And if you want any advice on your startup, I'd love to connect; this is my third startup and I've had a couple of successful exits, so I might have some ideas!

Which GPUs do you use? Is the pricing ($1 per 1000 requests) independent of inference time and bandwidth? For instance, some of our models finish within 2 seconds while other models take ~60 seconds, depending on the input. We have been searching for something like this for a long time, but all the other options were lacking in one way or another.

Hey panabee, pricing is definitely something I want to iterate on. I wanted to start off simple and then move to tiered pricing based on what type of model you're running, whether you need CPU/GPU, and tiers of execution time. Let's definitely connect, as I'd love to serve your use case.
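For context on why execution-time tiers matter, here's a back-of-envelope comparison of per-request compute cost against a flat $1-per-1000-requests price; the GPU hourly rate is an assumed figure, not an actual quote:

```python
PRICE_PER_REQUEST = 1.00 / 1000   # flat $1 per 1000 requests
GPU_COST_PER_HOUR = 0.90          # assumed cloud GPU hourly rate

def compute_cost(inference_seconds):
    """Raw GPU cost of serving one request of the given duration."""
    return GPU_COST_PER_HOUR * inference_seconds / 3600

fast_model = compute_cost(2)    # a 2 s inference costs well under the flat price
slow_model = compute_cost(60)   # a 60 s inference costs far more than the flat price
```

Under these assumed numbers, the 2-second model leaves margin at a flat rate while the 60-second model runs at a loss — which is exactly the gap that execution-time tiers would close.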
