Ask HN: How to incorporate machine learning into day job?
169 points by s_c_r on Dec 10, 2018 | 53 comments
I work for a small regional shipping company, mostly building CRUD apps and doing EDI integrations. I'd like to find a practical side project using machine learning and/or data science that could add value at work, but for the life of me I can't come up with any problems that I couldn't solve with a relational database (postgres) and a data transformation step. I've spent some time learning pytorch, numpy, and pandas but I know that if I don't use it, especially at work, I'll just forget everything I've learned. My boss is a dev and is generally supportive of learning new things and finding ways to innovate independently, so if I can come up with a good idea I'm sure he'll let me pursue it in my spare time. Has anyone tried to do this before? Any suggestions would be great.



1) You could use ML to assess the on-time reliability of each of your delivery agents (contractors or individuals). Providing more accurate estimates of delivery time to customers (maybe via text message) might be very desirable. Or you could notify only when expected delivery time has changed (arriving earlier or later). For Just In Time-based shops, this could be a big win.

2) You could assess different delivery routes/regions to determine if they are more/less on-time than other routes/regions. Is the number of delivery vehicles adequate? When should you adjust the number of vehicles or change the routes themselves (like moving some peripheral regions to another route, or adjusting the cost charged when delivery is delayed)?

3) When do external factors (like weather, esp rain or snow) introduce delays? Can you predict these delays, and ideally, compensate by changing routes or adding more delivery vehicles?

4) Should you more dynamically adjust your shipping fees to reflect faster/slower delivery time targets? This way you can tune your routes and manpower to save money for those who aren't as time sensitive, and improve the response time for those who are.

A lot of this is basic operations research. But you can call it AI, or use AI techniques just as well as traditional OR methods. Nobody will care what math/methods you use if you can add value.


> 1) You could use ML to assess the on-time reliability of each of your delivery agents (contractors or individuals)

Keep in mind that there may be flow-on effects and ethical considerations. Once you assign a metric to individuals, someone is going to start attaching it to KPIs, ranking individuals by it, and ultimately firing individuals by it.

Ethics is an increasingly prominent aspect of ML.


This is so cool. What are some of the traditional methods used to analyze this kind of data?


While I can't speak to what traditional methods are for this type of analysis, I do have some tangentially related experience. I used to work in supply chain at a massive consumer goods company, and was part of the user rollout for software that touched on some of the topics mentioned above.

The "AI" software basically just applied common sense and basic regression modeling to the situation. Originally, SAP (used for resource planning) took input data at face value - i.e. if a carrier said they have X capacity along Y route with a 7 day lead time, that was statically entered into SAP and all dependencies on that information took it as gospel[1]. The secret sauce of the new system was that it looked at actual data in the system to calculate those variables. It'd pull raw transaction data out of SAP for stuff like when a pickup was requested/scheduled, when a pickup actually occurred, when a delivery was anticipated, when it was actually marked as delivered, etc. Run some basic SQL and regression analysis over that, then override the static values in the system so they were more realistic.

Not super sophisticated, but had a pretty substantial impact at scale. Unfortunately in the US specifically, that impact was negative, due to certain cultural traits that impacted user adoption[2]. But in most other regions globally, it was integrated into the planning process much more successfully and drastically improved forecasting accuracy.

[1] This was how it was configured where I was at, at least. I'm not familiar enough with SAP as a whole to know if this was a peculiarity of that specific SAP instance or if it was a limitation of SAP itself.

[2] I came onto the team right after initial rollout, at which point every region globally was showing positive returns from the rollout. Except the US, which had actually shown negative returns (not just neutral). My role was to basically "redo" the US rollout, and almost all of the issues stemmed from cultural differences in how work is approached.


Any chance you can elaborate on [2]? Sounds interesting.


This is a common problem in tech:

"I've just learned a neat new tool but I never apply it because I can solve all the problems in front of me with the tools I have."

In effect, you've found a local maximum where every direction seems like a step backwards, or an investment of time without any reasonable payoff.

Here are two general strategies to deal with this:

1. Take a well-understood, well-documented existing need, and replicate the solution with the new toolkit. Acknowledge from the outset that this will be a step backwards, but go through the details anyway to better understand the technology. The goal isn't to make the system better, but to improve your understanding of ML and its real-world application. By choosing a well-understood system, you are only learning applied ML rather than trying to simultaneously learn ML and the problem. Work toward parity with your existing methods. This part is rarely a big step forward, but I guarantee that this process will generate 100 good ideas about where to go next.

2. Find problems that were previously ignored, because they couldn't be solved. Something no one is even thinking to ask for, because none of the prior tools could do the job. This is the ideal situation because you are in a greenfield space where anything is an improvement. For ML specifically look at anywhere a lot of data is being generated but no one has the time to read it all unless something goes wrong.

When learning any new technology there is always a gap between learning it in the lab, and trying to execute with it IRL. The best way to maximize your own ability is to simply start applying it and building experience. Don't wait for a perfect halo project.


Using a broad definition of ML/data science, here are a few ideas:

First, coding toy problems (related to shipping or not) that implement linear regression, genetic algorithms, or neural networks, etc. will be a useful start

Analyze shipping and tracking EDI data to predict whether a shipment will be late (a 0.0 to 1.0 output, with 1.0 meaning it is certain to be late); see the sketch after this list

Predict the likelihood a customer will churn (stop using your services) based on changes in volume, billing amounts, and other characteristics

Predicting this year's peak season shipping volume based on past years' data. See if you can beat the marketing/sales folks' predictions

Identify factors correlated with the most profitable shippers

Predict the likelihood a package is damaged

Use a genetic algorithm to improve driver routing

Reconfigure pickup times / drop off times to improve profitability

Use EDI shipping data to build a network graph of who is shipping to whom, segmented by type of some sort. Say you find that many A-type firms are shipping to B-type firms; any B-type firms that are not already customers could be interesting targets.

Score prospects' likely profitability by comparing their characteristics to those of existing customers with known profitability

Use a neural network (or something else) to analyze EDI shipping data, damage data, and make packaging recommendations to customers

Analyze tracking EDI data, segmented by delivery area (zip+4?) and see if there are areas where drivers are more efficient at delivering faster. Maybe start an initiative to look at what separates the most efficient drivers from the least.

Reporting: not sexy, but really useful in this space
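As a concrete starting point for the late-shipment idea, here is a minimal scikit-learn sketch (all column names are invented; it assumes a historical table with a was_late outcome recorded):

  import pandas as pd
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  df = pd.read_csv("shipments.csv")
  features = ["origin_zip", "dest_zip", "weight_kg", "carrier_id", "day_of_week"]
  X = pd.get_dummies(df[features],
                     columns=["origin_zip", "dest_zip", "carrier_id"])
  y = df["was_late"]

  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.2, random_state=0)
  model = RandomForestClassifier(n_estimators=200, random_state=0)
  model.fit(X_train, y_train)

  # predict_proba gives the 0.0-to-1.0 "probability of being late" output.
  p_late = model.predict_proba(X_test)[:, 1]
  print("Held-out accuracy:", model.score(X_test, y_test))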

Bona fides: I used to work in the supply chain consulting space and consulted at firms like yours. Things are surprisingly basic in the shipping space - less meaty data science than one might think.

Edit: Formatting


Thank you; reading your post has given me some ideas to play around with for applying some of these techniques to incident data.


To get started, I'd pick the most important business problem you have and then solve it using the simplest machine learning approach.

You mentioned using PyTorch. Instead, I recommend a classical machine learning approach using a library like scikit-learn (https://scikit-learn.org/). Use a random forest classifier and you'll get pretty good results out of the box.

If your data is in a postgres database across multiple tables, you will likely have to perform feature engineering in order to get it machine learning ready. For that, I recommend a library for automated feature engineering called Featuretools (http://github.com/featuretools/featuretools/). Here's a good article to get started with it (https://towardsdatascience.com/automated-feature-engineering...)
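For the curious, here is a tiny sketch of what that automated feature engineering looks like (API as written for Featuretools 1.x, which postdates this thread; the tables and columns are made up for illustration):

  import featuretools as ft
  import pandas as pd

  customers = pd.DataFrame({"customer_id": [1, 2], "region": ["NE", "SE"]})
  shipments = pd.DataFrame({
      "shipment_id": [10, 11, 12],
      "customer_id": [1, 1, 2],
      "weight_kg": [120.0, 80.0, 200.0],
  })

  es = ft.EntitySet(id="shipping")
  es = es.add_dataframe(dataframe_name="customers", dataframe=customers,
                        index="customer_id")
  es = es.add_dataframe(dataframe_name="shipments", dataframe=shipments,
                        index="shipment_id")
  es = es.add_relationship("customers", "customer_id",
                           "shipments", "customer_id")

  # Deep Feature Synthesis: one row of aggregated features per customer.
  feature_matrix, feature_defs = ft.dfs(entityset=es,
                                        target_dataframe_name="customers")
  print(feature_matrix.head())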

Finally, you will need to define a prediction problem and extract labeled training examples. I see people in this thread have suggested ideas of problems to work on. The key here is to make sure that you pick a problem where you can both make the prediction and take an action based on it. For example, you could predict that there will be an influx of shipments to fulfill tomorrow, but that might not be enough time to hire more people to help you fulfill them.

If you're curious what the process looks like end-to-end check out this blog series on a generalized framework for solving machine learning problems that was applied to customer churn prediction: https://blog.featurelabs.com/how-to-create-value-with-machin...

Full disclosure: I work for Feature Labs and develop Featuretools.


My general tip is to look for things that are almost but not quite automatable, where a human needs to do a quick look-over.

One big example is fraud: it's next-to-impossible to define a 100% accurate set of rules to filter fraud, but it's often easy to train an algorithm to catch the worst offenders, or flag suspicious cases to significantly narrow the amount a human needs to review.


Find a system or process that uses a series of rules to categorize, label, or action things, especially one that is occasionally incorrect. Model these rules with a ML algorithm using the rule outputs & user corrections as labels. See if you can build an ML system that out-performs the rules. (If you can, it'll probably be by looking at data that the rules didn't consider.)

Note: your ML system will likely be less explainable than the existing rules. This won't matter as much if the current rule collection is already more complex than a human can deal with. It will matter a LOT if your decisions are subject to regulation.
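A toy sketch of the rules-as-labels idea, using a shallow decision tree so the result stays inspectable (file and column names are hypothetical; the labels are the historical rule outputs plus any user corrections):

  import pandas as pd
  from sklearn.model_selection import train_test_split
  from sklearn.tree import DecisionTreeClassifier, export_text

  df = pd.read_csv("rule_decisions.csv")
  X = df[["amount", "weight_kg", "distance_km", "customer_tier"]]
  y = df["final_label"]  # rule output, corrected by users where wrong

  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  tree = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

  print("Held-out accuracy:", tree.score(X_test, y_test))
  # A shallow tree remains human-readable, which softens the
  # explainability concern above.
  print(export_text(tree, feature_names=list(X.columns)))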


> I can't come up with any problems that I couldn't solve with a relational database (postgres) and a data transformation step.
Congratulations, you have seen through the hype! Most "machine learning" problems you see are solvable with just linear regression on slightly cleaned-up data.


Agree. However, linear regression IS machine learning (it's a supervised learning method). Most AI/ML is 80% data integration, 15% problem scoping, and 5% algorithm selection.


> pytorch, numpy

If your problems aren't audio / image based, then consider using traditional ML instead.

If you are just starting out, check out scikit-learn, SciPy, and graphical models like CRFs. They are tried and tested methods that also require less specialized skills.

As someone else said, a lot of AI, ML tools are simply repackaged old school OR methods. The older methods get 95% there, with <50% of the effort.

Cutting-edge ML isn't required for most problems, especially non-visual or time-series problems.


If you explore the word2vec family of algorithms, you can improve text search by pulling in external datasets. E.g. use a model trained on Wikipedia to find synonyms, or build a neural network that maps user search terms to documents in your database.
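A small sketch of the synonym idea with gensim (the pretrained vector set named here is one of gensim's downloadable datasets; it's GloVe rather than word2vec proper, but the interface is the same):

  import gensim.downloader as api

  vectors = api.load("glove-wiki-gigaword-100")  # trained on Wikipedia text

  def expand_query(term, topn=5):
      """Return the query term plus its nearest neighbours in vector space."""
      neighbours = [w for w, _ in vectors.most_similar(term, topn=topn)]
      return [term] + neighbours

  print(expand_query("freight"))  # e.g. cargo, shipping, ...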


If you're looking for problems to solve in the transportation / shipping context, one that comes to mind is estimated delivery. Try predicting the day (maybe even down to the hour) of when something will arrive at its destination once it enters your company's network. It may require feeding it the origin and destination, the product mix in the trailer, customer priority, weather conditions, time of year (peak season?), etc.
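Treated as a regression problem, a minimal sketch might look like this (feature names are invented; the target is known for completed shipments):

  import pandas as pd
  from sklearn.ensemble import GradientBoostingRegressor
  from sklearn.model_selection import cross_val_score

  df = pd.read_csv("delivered_shipments.csv")
  X = pd.get_dummies(
      df[["origin_hub", "dest_hub", "priority", "trailer_fill_pct",
          "month", "weekday"]],
      columns=["origin_hub", "dest_hub", "priority"])
  y = df["hours_to_delivery"]

  model = GradientBoostingRegressor()
  scores = cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error")
  print("Typical error (hours):", -scores.mean())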


Oh yeah, I can see how this one could be doable. Has practical application too, if it can be made accurate enough.


There must be someone in your company doing some form of business analysis? Probably in spreadsheets. Talk to them, especially about aspects of their work that involve probability. Don't expect to find a problem that looks exactly like one in the Tensorflow tutorial; you may have to get creative and you may have to do some maths with a more old-school flavour.


Start somewhere simple - like using ML to make search easier. Try using a collaborative filter to improve search for your company - even by doing something as simple as updating the placeholder text.


I like this idea, thanks.


I suggest it because that's exactly how I put my first ML model into production a couple of weeks ago. All my model does is update placeholder text to give suggestions to users based on previous actions. Took about 3 hours from start to finish: write training script, run it on cron, throw result in a cache, add API endpoint for accessing cached model, fetch model on frontend boot and store in state store, and integrate with search component.

https://updates.moonlightwork.com/updated-skill-search-now-w...

This book is kind of dated, but it's what I used for inspiration and reference: https://www.amazon.com/gp/product/B00F8QDZWG/ref=oh_aui_d_de...


Programming Collective Intelligence is a fantastic book. I honestly couldn't believe how many cool things it covers.

It doesn't assume much prior knowledge and guides you gently through solving a bunch of data problems.

No affiliation, but it's a great book regardless of being a bit old (code is easy to update, good explanations are timeless).


This is a great idea for a simple application of ML. Thanks for the tip!


Great idea to get started, thanks!


> for the life of me I can't come up with any problems that I couldn't solve with a relational database

It sounds like you have the right tool for the job now, so great. Keep using it. Dependencies should be added to projects as conservatively as possible. The best dependency is no dependency. You shouldn't go seeking out dependencies. Your app doesn't depend on machine learning, so why would you make it depend on something it doesn't actually depend on? Future maintainers (including yourself) would hate you for it.


Try implementing a genetic learning algorithm to send and reply to your work emails. For the fitness score, try using your yearly salary in dollars. I haven't yet implemented this, but theoretically there's no upper bound to how much money this will earn you.


Then everyone will act like management and the company will go broke? :P


Some CRUD applications have scope for adding recommendations: based on other things the client has created, predict and auto-populate fields in advance, or give recommendations if changing some fields in existing records would help them. Again, it will depend on the type of CRUD application.

There's also scope for using ML on the analytics and monitoring side, apart from the main application; that is generally better tolerated by the product team.


ML is not a glue gun. CRUD apps need glue guns. You need to find a place that needs a sorting (classifying) machine.


Here is some marketing material that may spark some ideas:

https://www.ibm.com/watson/supply-chain/resources/csc/deskto...

One concrete example IBM likes to talk up is predicting shipping delays due to weather events and automatically recommending alternate suppliers.


Pick a report with lots of numerical columns that your company is currently having accounting or the CFO "interpret" into a go/no-go decision. Implement logistic regression.
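A minimal sketch of that, with standardized coefficients you could put in front of the CFO (column names are placeholders for whatever the report contains):

  import pandas as pd
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler

  df = pd.read_csv("monthly_report.csv")
  X = df[["revenue", "margin_pct", "volume", "dso_days"]]
  y = df["went_ahead"]  # 1 = go, 0 = no-go, from past decisions

  model = make_pipeline(StandardScaler(), LogisticRegression())
  model.fit(X, y)

  # Standardized coefficients hint at which columns drive the decision.
  coefs = model.named_steps["logisticregression"].coef_[0]
  print(dict(zip(X.columns, coefs.round(2))))
  print("P(go) for the latest row:", model.predict_proba(X.tail(1))[:, 1])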

There may be a way you can do some computer vision tasks for quality control in some parts of your business -- most businesses that deal in physical goods have quality control by visual inspection and in most of those you can end up with a CNN that provides a quick enough, good enough solution. However, sometimes for regulatory reasons it's not practical, or it's something that is not a critical part of the chain, and so on. But you could ask operations staff about whether they sometimes do that kind of task, and whether it takes up a lot of their day. It's not like you have to find the good idea alone.
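Since the original poster already knows PyTorch, a pass/fail inspection model can start as a fine-tuned pretrained network; a rough sketch (it assumes inspection photos sorted into pass/ and fail/ folders):

  import torch
  import torch.nn as nn
  from torchvision import datasets, models, transforms

  tfm = transforms.Compose([transforms.Resize((224, 224)),
                            transforms.ToTensor()])
  data = datasets.ImageFolder("qc_photos/", transform=tfm)  # pass/, fail/
  loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

  model = models.resnet18(pretrained=True)
  model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: pass/fail

  opt = torch.optim.Adam(model.parameters(), lr=1e-4)
  loss_fn = nn.CrossEntropyLoss()
  model.train()
  for epoch in range(3):  # a token training loop
      for images, labels in loader:
          opt.zero_grad()
          loss = loss_fn(model(images), labels)
          loss.backward()
          opt.step()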


I am a beginner too. Below is a problem I am currently working on. You could work on something similar if you have a lot of PDFs in any part of your shipping business. I started attacking this mainly to learn Python and its ML libraries.

My problem statement involves classifying hundreds of thousands of PDFs into different categories based on the content of the first few pages. That is, if you have a PDF of a novel by Jeffrey Archer, it should be categorized as Entertainment or Novel, etc. If you have an e-book of, say, Python for Dummies, it should be categorized as Engineering or Technology or Education or Programming and the like.
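One plausible pipeline, sketched with pypdf for text extraction and a linear classifier over TF-IDF features (the paths and labels are stand-ins; a hand-labeled sample is needed for training):

  from pypdf import PdfReader
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.pipeline import make_pipeline
  from sklearn.svm import LinearSVC

  def first_pages_text(path, n_pages=3):
      reader = PdfReader(path)
      pages = [reader.pages[i] for i in range(min(n_pages, len(reader.pages)))]
      return " ".join(page.extract_text() or "" for page in pages)

  # A hand-labeled sample of the collection to train on.
  train_paths = ["novels/archer1.pdf", "tech/python_for_dummies.pdf"]
  train_labels = ["Entertainment", "Programming"]

  clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
  clf.fit([first_pages_text(p) for p in train_paths], train_labels)

  print(clf.predict([first_pages_text("unsorted/some_ebook.pdf")]))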


What's the risk of loss/insurance coverage situation for the items you ship? ML is good for digesting several input variables and assessing the risk of X happening. You might be able to pair that with an up-sell of insurance coverage to the client to generate additional income, or use it to identify cases where the company self-insuring certain shipments might be an effective way to save money for little risk. Depending on how your company typically deals with insurance, of course.


We should talk! I do work on automatically coding products for a shipping survey at the Census Bureau. One of the earliest production uses of ML here at Census :)

5 minute deck: https://github.com/codingitforward/cdfdemoday2018/blob/maste...

Feel free to shoot me a message.


I'll put out another (basic) idea that isn't listed below.

If you have access to production logs and metrics, try to model things like page load times, server load, network latency, errors/timeout, number of page views/unique visitors.

You might hit on unexpected correlations and maybe unknown bugs (e.g. when page X is loaded with input Y, timeouts and server load increase because of a broken SQL request), or insights on the health of the production platform.
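One low-effort way in, sketched with pandas plus an isolation forest (the metric names are placeholders for whatever the logs contain):

  import pandas as pd
  from sklearn.ensemble import IsolationForest

  metrics = pd.read_csv("metrics.csv", parse_dates=["ts"], index_col="ts")
  # e.g. columns: load_time_ms, server_load, latency_ms, errors, page_views

  # Unexpected pairwise correlations are often the first clue.
  print(metrics.corr().round(2))

  # Flag the oddest 1% of time windows for a human to look at.
  iso = IsolationForest(contamination=0.01, random_state=0)
  metrics["anomaly"] = iso.fit_predict(metrics)  # -1 marks outliers
  print(metrics[metrics["anomaly"] == -1].head())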


Ask the product and strategic analytics folks what statistical models they're already using, then identify gaps between that and what would be required to add (additional) value by making use of ML techniques.

Also consider that ML may simply not be a useful tool for your company. "Let's not spend (more) time trying various ML techniques" is a perfectly valid and useful outcome of an experiment with ML.


It would not be incorporating ML into your app's code base directly, but one way I incorporate ML is to use it to analyze application performance data in hopes of learning how to improve it.

http://glennengstrand.info/software/architecture/msa/ml


I am facing the exact same scenario in the railroad industry. I have the idea and the data, and I know how to build and train the model, but my biggest challenge is delivering it and actually making use of it.

In the end, is it really worth all that effort if you end up using Cognitive Services on Azure or ML on AWS? How do you deploy your model and actually USE it in a WPF CRUD app?


Reminded me of this link I saw here earlier: https://news.ycombinator.com/item?id=17433752

It shows a link to an article called: "No, you don't need ML/AI. You need SQL"


"I can't come up with any problems that I couldn't solve with a relational database (postgres) and a data transformation step"

Do it anyway with machine learning and see how it goes. At least you will know the expected results.


That will only make those systems harder to understand for other developers, and the company isn't in the business of entertaining a single developer's ML fantasies.

I get that machine learning is interesting to some people, and it certainly has its place, but applying needlessly complex solutions to simple problems is how software developers get a bad reputation.

If you don’t have the need don’t use: ML / AI / Cloud computing / Hadoop / or any other random technology.


On the other hand, if you don’t build something with it, it’s hard to tell when to apply the tool.

I personally have to build some things that turn out to be the wrong approach so I really understand how to build the right thing later.

Some small problem that could be replaced is a great opportunity to really understand the solution space ml offers.

It really boils down to how important people are to your organization. There is a spectrum between cog and irreplaceable; sometimes growing people is totally worth the cost.


Exactly. Trying out new things in small projects prepares you for bigger things, and if it goes wrong nothing is lost. A lot of failed projects come from jumping into something new with one big step.


If it's a relatively simple thing I think it's a good opportunity to try out something new and see how it goes. If it doesn't work out there is not much lost and if it works out you are prepared for bigger things. I think it can be a mistake to do new things only when you really need them.


As a hint, ML is about predicting the future. How about predicting some kind of operational volume (sales, for example) before it happens?


In CRUD apps, suggestions for lookup values (based on previously entered fields) are usually good candidates for ML.


You have hit upon the dark secret of the industry. ML is a solution looking for a problem. It's nowhere near as bad as blockchain, mind you, but ML and Kubernetes both have far more people eager to do them than there is actual work...

Having said that there must be a beancounter in your organisation whose job is making forecasts. That person wouldn’t hesitate to lay you off if the company hits a rough patch. I think you get my meaning.


> ML is a solution looking for a problem

I don't think it is true.

ML is legit. The dark secret is that a lot of it isn't new.

It is only now that the data, compute, and hype have caught up to how useful good old applied statistics is.

Nowadays, a lot of older areas have been repackaged under the fancier terms ML and AI. This goes hand in hand with some legitimately awesome cutting-edge research with widespread uses (smartphone cameras, self-driving, language translators, massive social media knowledge graphs).

Places such as MIT and CMU would not have started undergrad programs in AI unless they thought it was here to stay.


ML is great if you already have scale, data quality, and problem sets that match ML approaches.

But often the bulk of the work is getting or cleaning data to make ML work in the first place. And even after that, the results sometimes need to be better or cheaper than having an employee personally look at the same data (i.e. a low-tech expert system).

If you have a thousand emails and you want an emotion score assigned to the contents, it's often cheaper to use temp workers or something like Mechanical Turk.

It's also worth noting that many interesting automation tasks don't have room for 99%-confidence answers. It's fine if Google occasionally flips its search result rankings. It's not OK to miscalculate even 1% of account balances. And predicting which balances are bad might be useful, but the bulk of the work is the boring enterprise work of getting the concerns in front of a human.

That's why ML seems like a solution in search of a problem. Unless your problem is getting people to click on ads.


ML in the hands of an individual with relevant educational credentials (e.g. statistics, science) is legit.


In so many ways, I disagree. There are entire classes of algorithms that we've almost been conditioned as developers to avoid that are amenable to ML techniques. Whether it's ranking/filtering sets of data, where the best we've previously done is take a SWAG at which characteristics of the data should be weighted most highly, or parsing unstructured data, learning to solve these classes of problems with ML expands our repertoire of what we can do as developers.

That doesn't mean that there aren't people applying ML techniques to problems that can easily be solved in other ways, but that happens with every hyped technology. And for developers learning these techniques, there's going to be a certain amount of applying them more widely than is justified until they learn them well enough to improve their judgment. That's okay and all part of the learning (no pun intended) process.

As far as the original poster is concerned, since he works at a shipping company, a problem as simple as "find the address on a webpage" could be a neat proof of concept to show his bosses. Imagine if they could offer customers the ability to "ship" to a URL. That kind of thing might be doable with regexes and such if you constrain the problem to only addresses in the US/Canada, but worldwide address formats are so diverse that ML really helps.


True story: 10 years ago I was doing linear regression on infra data to predict capacity issues. My system knew when it had made a prediction and when capacity increased, so based on the lag it could tell you the exact date to place an order so more disks could be added to the array before a threshold was breached. And whatever else we were monitoring too: batch job completion times vs. number of CPUs, the whole works. PCA to figure out what was relevant. Clever stuff, you might think.

In reality no one cared. We went on placing a disk order whenever the first dumb Nagios alert paged someone. We bought more CPUs when the moaning from the analysts reached the CEO and he in turn moaned to our manager. It was fun to do and I learnt a lot, but realistically the impact on the business was minimal...


don't.



