
Deploying ML/Deep Learning Models to Production - prasann16
What platform do you use to deploy your Machine Learning/Deep Learning models to production?

Do you prefer building a custom solution (using Flask/Docker) or managed services like Azure, Algorithmia, etc.? And why?

I have never deployed models before, so I'm weighing the pros/cons of each approach.
======
WestCoastJustin
It depends on the software you are using. But you might want to check out
[https://cloud.google.com/ml-engine/](https://cloud.google.com/ml-engine/).
It works pretty well for TensorFlow. Basically, you upload your model and it'll
expose it as an API. Then you can flip through versions as you evolve it.

If you use something like Flask/Docker you totally own the entire pipeline,
and that might be a good or bad thing. By own, I mean hosting it yourself. Do
you really want to own the pipeline? Are you getting anything out of it? Is it
a competitive advantage for you somehow? If not, you probably want to just
offload it to something else. Then someone else can worry about all the
production issues, and you can focus on what you're good at.

~~~
prasann16
I am not getting anything out of building my own pipeline. I am using PyTorch.
Do you know of something similar to Google ML Engine for PyTorch, where I can
upload my model and it will provide an API for inference?

~~~
WestCoastJustin
I only know about the Google one. Looks like they have added support for
scikit-learn, XGBoost, Keras, and TensorFlow (in beta). So you might be able
to use it by porting some of your stuff. I'd just run through the examples to
see if it'll work for what you want.
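For the porting route, a rough sketch of how a scikit-learn model might be exported as a single pickled artifact for upload to a hosted service. The filename `model.joblib` and the toy model are assumptions for illustration; check the service's docs for the exact format it expects:

```python
# Illustrative sketch only: serialize a scikit-learn model to one file,
# which is roughly what hosted prediction services ingest.
import joblib
from sklearn.linear_model import LogisticRegression

# Tiny toy dataset: one feature, binary label.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

model = LogisticRegression().fit(X, y)
joblib.dump(model, "model.joblib")

# A serving backend would load the artifact the same way at inference time.
restored = joblib.load("model.joblib")
```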

------
throw3v
What is the title of the person that deploys ML/deep learning projects (end to
end)?

My understanding is that:

- data scientists create the model

- data engineers do the data wrangling and data warehousing

- devops engineers are responsible for more software-engineering-oriented
projects and may lack some of the skills that would be required for ML/DL
deployment (and debugging).

Am I missing something here or is there a gap?

Off-topic (sorry for hijacking): if anyone has experience in deploying ML/DL
projects as a freelancer, shoot me an email.

~~~
avin_regmi
What kind of model are you looking to deploy? Is it PyTorch/TensorFlow?

~~~
throw3v
Different projects with different specs, PyTorch/TensorFlow included.

~~~
avin_regmi
Shoot me an email at avin@panini.ai. I can definitely help you with this!

------
lawlorino
It depends on your use case and the type of model, e.g. whether it's from
TensorFlow or scikit-learn, or just a simple lookup table. Could you expand a
little? As someone else has said already, it also depends on how comfortable
you are with owning the whole pipeline, or how necessary that is.

~~~
prasann16
It's not necessary for me to own the pipeline. I am just looking for a
scalable service where I can upload my model and it will provide me with an
API for inference, which I can use on my website. Have you used any managed
services like Algorithmia or Microsoft Azure ML? Do you know of any other
services I could look into?

