Ask HN: What is your ML stack like?
387 points by imagiko 24 days ago | 131 comments
How did your team build out AI/ML pipelines and integrate them with your existing codebase? For example, how did your backend team (using Java?) work in sync with your data teams (using R or Python?) so that deploying models to production needs as little rewriting and glue code as possible? Which architectural decisions worked, and which didn't? I'm currently working to make an ML model written in R run on our backend system written in Java. After the dust settles I'll be looking for ways to streamline this process.

What didn't work:
Shipping pickled models to other teams.
Deploying SageMaker endpoints (too costly).
Requiring edits to config files to deploy endpoints.
What did work:
Shipping HTTP endpoints (minimal sketch after this list).
Deriving API documentation from model docstrings (the same sketch shows docs generated from a docstring).
Deploying Lambdas (less costly than SageMaker endpoints; handler sketch below).
Writing a ~150-line Python script that pickles the model and saves a requirements.txt, some API metadata, and test input/output data (condensed packaging sketch below).
Continuous deployment (after the model is saved there is no manual intervention, as long as the model's responses match the saved output data; validation sketch below).
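
Roughly what one of those HTTP endpoints looks like, heavily simplified. The model name, file path, and request schema here are made up, and FastAPI is just one option; the point is that the endpoint function's docstring shows up in the generated OpenAPI docs at /docs, which is one way to derive API documentation from docstrings:

    # serve_model.py -- hypothetical example, not actual production code.
    import pickle

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="example-model")

    # Assumed artifact path; loaded once at startup.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    class PredictRequest(BaseModel):
        features: list[float]   # flat feature vector; adjust to the real schema

    class PredictResponse(BaseModel):
        prediction: float

    @app.post("/predict", response_model=PredictResponse)
    def predict(req: PredictRequest) -> PredictResponse:
        """Return the model's prediction for one feature vector.

        This docstring appears in the generated API docs, so the model's
        documentation lives next to the model code.
        """
        y = model.predict([req.features])[0]
        return PredictResponse(prediction=float(y))

Run it with uvicorn serve_model:app; a Java backend then just makes a plain HTTP POST to /predict and never touches Python (or R) directly.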
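The Lambda variant is the same idea behind API Gateway. A stripped-down handler, assuming the pickle is bundled into the deployment package and the request body is JSON (names and event shape are placeholders):

    # handler.py -- hypothetical Lambda handler behind API Gateway.
    import json
    import pickle

    # Loaded once per cold start and reused across invocations.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    def handler(event, context):
        """Predict for a single feature vector passed in the request body."""
        body = json.loads(event.get("body") or "{}")
        features = body["features"]                 # e.g. [1.0, 2.5, 0.3]
        prediction = model.predict([features])[0]
        return {
            "statusCode": 200,
            "body": json.dumps({"prediction": float(prediction)}),
        }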
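A very condensed sketch of what the packaging script produces; the single-directory layout and field names here are guesses, not the actual ~150-line script:

    # package_model.py -- condensed sketch of the packaging step.
    import json
    import pickle
    from pathlib import Path

    def package(model, metadata, test_input, expected_output,
                requirements, out_dir="artifact"):
        """Write the model plus everything the deploy pipeline needs to one directory."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)

        with open(out / "model.pkl", "wb") as f:
            pickle.dump(model, f)

        (out / "requirements.txt").write_text("\n".join(requirements) + "\n")
        (out / "metadata.json").write_text(json.dumps(metadata, indent=2))
        (out / "test_io.json").write_text(json.dumps(
            {"input": test_input, "expected_output": expected_output}, indent=2))
        return out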
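The continuous-deployment check is then just: re-load the saved artifact, run the saved test input through it, and compare against the saved expected output; match means deploy, mismatch means stop. A sketch, with the tolerance and file names assumed to match the packaging sketch above:

    # validate_artifact.py -- hypothetical CD gate: deploy only if outputs still match.
    import json
    import math
    import pickle
    import sys
    from pathlib import Path

    def validate(artifact_dir="artifact", tol=1e-6):
        out = Path(artifact_dir)
        with open(out / "model.pkl", "rb") as f:
            model = pickle.load(f)
        test_io = json.loads((out / "test_io.json").read_text())

        predictions = model.predict(test_io["input"])
        expected = test_io["expected_output"]
        return all(math.isclose(float(p), float(e), abs_tol=tol)
                   for p, e in zip(predictions, expected))

    if __name__ == "__main__":
        ok = validate(sys.argv[1] if len(sys.argv) > 1 else "artifact")
        sys.exit(0 if ok else 1)   # non-zero exit blocks the deploy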