
I work in manufacturing. We have an acoustic microscope that scans parts with the goal of identifying internal defects (typically particulate trapped in epoxy bonds). It's pretty hard to define what size/shape/position/number of particles warrants failing the device. Our final product test can tell us which product is "good" or "bad" based on electrical measurements, but that test can't be applied at the assembly stage where we want to catch the defect.

I recently demonstrated a really simple bagged-decision-tree model that "predicts" whether the scanned part will go on to fail at downstream testing with ~95% certainty. I honestly don't have a whole lot of background in ML, so it's entirely possible that I'm one of those dreaded types who apply principles without fully understanding them (and yes, I do actually feel quite guilty about it).
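
For anyone curious, here's a minimal sketch of the kind of model I mean, assuming scikit-learn and assuming the scan features (particle size/shape/position counts, etc.) have already been extracted into a table; the synthetic data here is just a stand-in for real scan measurements:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for scan features; y = 1 means "failed downstream test".
    # weights=[0.7] gives roughly a 30% defect rate, as on our line.
    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7],
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Bagging: each tree is trained on a bootstrap resample of the training set.
    model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                              oob_score=True, random_state=0)
    model.fit(X_train, y_train)
    print("out-of-bag estimate:", model.oob_score_)
    print("held-out accuracy:  ", model.score(X_test, y_test))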

The results speak for themselves though: $1M/year in scrap cost avoided (if the model is approved for production use), just by being able to tell earlier in the line when something has gone wrong. That's on one product, in one factory, at one company that has over 100 factories worldwide.

The experience has prompted me to go back to school to learn this stuff more formally. There is immense value to be found (or rather, waste to be avoided) using ML in complex manufacturing/supply-chain environments.


OK, there are some warning signs here.

First, bagged decision trees are a little hard to interpret; what is the advantage of a bagged model vs. plain trees? Are you using a simple majority vote for combination? What are the variances between the different bootstraps?
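
(If this happens to be scikit-learn's BaggingClassifier, the default combination is an average of the per-tree class probabilities rather than a raw majority vote, and the spread across bootstraps is easy to eyeball. A toy sketch:)

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, random_state=0)
    model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                              random_state=0).fit(X, y)

    # How often do the individual bootstrap-trained trees disagree?
    votes = np.array([tree.predict(X) for tree in model.estimators_])
    disagree = (votes.min(axis=0) != votes.max(axis=0)).mean()
    print("fraction of samples with any tree-level disagreement:", disagree)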

Second, what do you mean by 95%? Do you mean that out of 99,999 good parts, 4,999 are thrown away, and one bad one is picked out as bad?

Third, what is this telling you about your process? Do you have a theory, evolved from the stats, that tells you why parts are failing? This is the real test for me: if the ML is telling you where it is going wrong (even if it's unavoidable or too expensive to solve), then you've got something real.

Unfortunately, my concern would be that, as it stands, you might find your classifier doesn't perform as well in production as it did in test. My worry comes from the fact that this same thing has happened to me!

Several times...


It sounds like the OP is scanning for internal defects in bonds via impurities being trapped in there. These occur relatively randomly, and there's some balancing point where it's just not worth trying to make the production line cleaner versus binning parts that fail some QA criterion. I do similar things with castings: you simply get certain voids and porosity in the steel when it's cast, and either you can spend a tonne of money trying to eliminate them or you can spend less money testing the parts and binning those that aren't up to par.

I'd hazard a guess that the 95% is the reduction in how many parts made it through the first test only to be caught later at the more expensive stage. So instead of binning 100 parts a month at that second stage, they now bin 5 parts a month and catch way more early on.

It sounds like the OP is using ML to identify flaws that simply occur due to imperfections in the manufacturing process. That's life; it happens. You can know that they will occur without necessarily being able to prevent them, because maybe there's some dust or other particulate in the air that deposits into the resin occasionally, or maybe the resin begins to cure and leaves hard spots that form bond flaws. There are heaps of possible reasons. It sounds more like the ML is doing classification of 'this has too much of a flaw in a local zone' vs. 'this has some flaws but it's still good enough to pass', which is how we operate with casting defects. For example, castings have these things called SCRATA comparator plates, where you literally look at an 'example' tactile plate, look at your cast item, then decide, on a purely qualitative basis, which grade of plate it matches. Here [1] are some bad black-and-white photos of the plates.

[1] http://www.iron-foundry.com/ASTM-A802-steel-castings-surface...


This is pretty spot on. We know why the defects happen and why they cause downstream test failures, but we lack the ability to prevent (all of) them.

To clarify that 95% value, because it is admittedly really vague: it's actually a 95% correct prediction rate. So far we get ~2.5% false negatives and ~2.5% false positives. 2.5% of the parts evaluated will be incorrectly allowed to continue and will subsequently fail downstream testing (no big deal). More importantly, 2.5% of parts evaluated will be wrongly identified as scrap by the model and tossed, but this still works out to be a massive cost savings because a lot of expensive material/labor is committed to the device before the downstream test.
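
In other words, with illustrative numbers per 1,000 parts evaluated:

    n = 1000
    false_negatives = 25  # ~2.5%: bad parts allowed to continue, caught downstream
    false_positives = 25  # ~2.5%: good parts wrongly scrapped early
    correct = n - false_negatives - false_positives
    print(correct / n)    # 0.95 -> the "95% correct prediction rate"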


I hope you get a decent chunk of those cost savings as a reward for your effort, great job.


D'Angelo Barksdale: Nigga, please. The man who invented them things? Just some sad-ass down at the basement at McDonald's, thinkin' up some shit to make some money for the real players.

Malik 'Poot' Carr: Naw, man, that ain't right.

D'Angelo Barksdale: Fuck "right." It ain't about right, it's about money. Now you think Ronald McDonald gonna go down in that basement and say, "Hey, Mista Nugget, you the bomb. We sellin' chicken faster than you can tear the bone out. So I'm gonna write my clowny-ass name on this fat-ass check for you"?

Wallace: Shit.

D'Angelo Barksdale: Man, the nigga who invented them things still workin' in the basement for regular wage, thinkin' up some shit to make the fries taste better or some shit like that. Believe.

[pause]

Wallace: Still had the idea, though.


Can't believe I'm seeing The Wire referenced on HN.


Oh, a Wire reference on HN... my life is one step closer to completion.


> 2.5% of parts evaluated will be wrongly identified as scrap by the model and tossed

2.5% of what, though? If only 1 in a million parts is actually bad, you're still tossing many more good parts than bad parts.


Correct; as mentioned, the cost savings still work out. The defect rate is around 30% (nowhere close to 1 in a million).
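
To make the base-rate arithmetic concrete (illustrative numbers only, treating the ~2.5% loosely as per-class error rates):

    # Good parts tossed vs. bad parts caught early, per million parts evaluated
    def scrap_breakdown(defect_rate, n=1_000_000, fp=0.025, fn=0.025):
        bad = n * defect_rate
        good = n - bad
        return good * fp, bad * (1 - fn)  # (good tossed, bad caught early)

    print(scrap_breakdown(1e-6))  # rare defects: ~25,000 good tossed to catch ~1 bad
    print(scrap_breakdown(0.30))  # ~30% defects: 17,500 good tossed, 292,500 caught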


This sounds like a pretty standard use of ML to me. No need to feel guilty; this stuff just isn't very difficult from a user's perspective, especially if you use the right libraries. It helps if you keep good records of your experiments, so you have a clear picture of what works and what doesn't.

By the way, control engineering for industry has traditionally been very difficult (but is paid very well) and requires knowledge of systems theory, differential equations, and physics. With the advent of ML, I suspect that might change; things may get a lot easier.


Care to elaborate?


I disagree with the above, but I think I can shed light on what they might mean. Usually, control theory (which is used in most manufacturing processes) requires quite a bit of background knowledge about the processes at hand, along with fairly powerful mathematical/physical tools, both to approximate and model such processes and to create systems that use those models to perform the desired task.

I believe the parent post means that with current simulation-based tools and the large amounts of data generated by manufacturing processes, one can work directly with abstract machine learning models instead of creating physical models or approximations thereof, thus disposing of the mathematical baggage of optimization/control theory and working with a black-box, general approach.

I disagree, since we have very few guarantees about machine learning algorithms relative to well-known control approximations with good bounds. Additionally, I think it's quite dangerous to be toying with such models in industrial processes without extensive testing, which, to my knowledge, is rarely done in most settings by experts, much less by people only recently coming into the field. Conversely, you're forced to consistently think about these things in control theory, which, I believe, makes it harder to screw up, since the models are also highly interpretable and can be understood by people.

This is definitely not the case in high-dimensional models: what does the 3rd edge connected to the 15th node in the 23rd layer of that 500-layer deep net mean? Is it screwing us over? Do we have optimality guarantees?


This is brilliant, would love to read a full write up on it. I hope you get a big raise.


If not, perhaps you should consider starting a company to develop this tech for others. Drop me a line :-)


Surely it would be guarded as a trade secret, as usually happens in large companies.


Yup; to do a proper write-up that would actually be interesting to read would require divulging IP.


Nice! You might like these links too: "Machine Learning Meets Economics" uses manufacturing quality as an example.

http://blog.mldb.ai/blog/posts/2016/01/ml-meets-economics/
http://blog.mldb.ai/blog/posts/2016/04/ml-meets-economics2/


This is awesome; thank you! I went through an exercise similar to the one described in your links when evaluating the utility of the tool I described above. It's a nice write-up of the logic.

In my case, the occurrence rate of the defect was very high and the false-positive cost is also very high, so my tool could provide value without being a particularly stellar model.
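
A back-of-the-envelope version of that logic, with all costs invented purely for illustration (and simplified to count only the value committed to each part):

    defect_rate = 0.30        # occurrence rate of the defect
    value_at_scan = 10.0      # value committed to the part when it's scanned
    added_downstream = 90.0   # material/labor added before the downstream test

    # Without the model, every bad part accrues the downstream cost before failing.
    baseline = defect_rate * (value_at_scan + added_downstream)

    # With the model (~2.5% of parts are false negatives, ~2.5% false positives):
    fn, fp = 0.025, 0.025
    with_model = (fn * (value_at_scan + added_downstream)  # bad parts that slip by
                  + fp * value_at_scan                     # good parts tossed early
                  + (defect_rate - fn) * value_at_scan)    # bad parts caught early

    print(baseline, with_model)  # expected scrap cost per part: 30.0 vs. 5.5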


Disclaimer: I have no experience implementing any kind of ML.

How easy will it be to update your model if/when the downstream process changes?

At a previous job we had a process that relied heavily on visual inspection from employees. I often considered applying ML to certain inspection steps, but always figured it would be most useful for e.g. final inspection to avoid having to update the models frequently as the processes were updated.


That's an interesting concern that I hadn't considered, if I'm understanding you correctly. I'm imagining you could have a situation where a downstream process change helps mitigate the effect of the upstream defect. In that situation, the ML model's measure of what constitutes a good or bad part would need to change.

I think I'm somewhat lucky in that, with my product, downstream processes are unlikely to change in a significant enough way to warrant retraining the model, but I guess that's probably the only way to handle it: retrain in the event of a significant process change. Our product stays fairly stable once it's released to production, and the nature of the downstream processes is such that they would have very little effect on the perceived severity of the defect at the final electrical test.


That's pretty awesome. What are you doing academically to learn this? Did you go somewhere for a master's?


I just started in UC Berkeley's MIDS program.

My only two misgivings about the program thus far are that it is 1) pretty expensive and 2) geared towards working professionals rather than academics. But my employer is helping pay for a good chunk of the degree, and I'm more interested in acquiring the skills and tools to go solve problems in industry than in doing research.

Otherwise it has been great so far. The program was attractive to me because it is somewhat marketed towards those who may not have a software background but have problems in their industry that could benefit from a "proper" data science treatment. I've been referring to my application of the principles as "Six Sigma for the 21st Century" with managers/directors. I think the vast majority of HN would groan at that term, but it helps communicate the potential value to someone who has no technical background in software whatsoever (think old-school manufacturing/operations types): process improvement for environments with many variables that have practically unknowable interdependencies (as is the case with the project described in my original comment).


Interesting perspective. I work in manufacturing and have created similar models in the past, and I was in the MIDS program but dropped out. Like you, I found it too expensive, and I had other misgivings as well.


Care to elaborate on those additional misgivings? One thing I could see is that the material might not be very mind-blowing to someone who already has a software background.


Hi, I work at a manufacturing startup on pretty much this exact problem (reducing scrap rates and downtime). I'd love to pick your brain if you have a moment to chat :) my email is mykola@oden.io


So this is a prototype and not really added value yet.


I don't understand why this comment is unpopular, since the GP is phrased in such a fashion that you only notice they're talking hypothetically if you read it carefully.

I don't think there's anything wrong with the GP's achievement or post (it's all interesting stuff), but if something has not yet been implemented, it's worth nothing, since there is "many a slip 'tween cup and the lip".


Totally fair; I developed the tool for a product that won't be released until early next year, so the cost savings are estimates based on expected production volumes. Its performance in a lower-volume development environment has been consistent, however.


I didn't downvote the parent, but I also don't consider it to be civil (which is requested in the HN Guidelines). There are a number of ways to point out that this is not yet in production, and many of them don't require the dismissive tone his comment struck.


I'm also in manufacturing and would be interested in hearing more about this, for detecting problems early on before NCRs are raised down the line.



