BMW shares AI tools used in production (bmwblog.com)
321 points by saranshk on Dec 15, 2019 | hide | past | favorite | 38 comments

The YOLOv3 paper (the model these tools are designed to work with) is extremely funny and worth reading for anyone who hasn't already.


Thank you! My favorite passage:

> YOLOv3 is a good detector. It’s fast, it’s accurate. It’s not as great on the COCO average AP between .5 and .95 IOU metric. But it’s very good on the old detection metric of .5 IOU. Why did we switch metrics anyway? The original COCO paper just has this cryptic sentence: “A full discussion of evaluation metrics will be added once the evaluation server is complete”. Russakovsky et al report that humans have a hard time distinguishing an IOU of .3 from .5! “Training humans to visually inspect a bounding box with IOU of 0.3 and distinguish it from one with IOU 0.5 is surprisingly difficult.” [18] If humans have a hard time telling the difference, how much does it matter?

> But maybe a better question is: “What are we going to do with these detectors now that we have them?” A lot of the people doing this research are at Google and Facebook. I guess at least we know the technology is in good hands and definitely won’t be used to harvest your personal information and sell it to.... wait, you’re saying that’s exactly what it will be used for??

> Oh. Well the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait.....

> I have a lot of hope that most of the people using computer vision are just doing happy, good stuff with it, like counting the number of zebras in a national park [13], or tracking their cat as it wanders around their house [19]. But computer vision is already being put to questionable use and as researchers we have a responsibility to at least consider the harm our work might be doing and think of ways to mitigate it. We owe the world that much. In closing, do not @ me. (Because I finally quit Twitter).

> 1 The author is funded by the Office of Naval Research and Google.
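For anyone unfamiliar with the metric being argued about: IOU (intersection over union) is the overlap ratio between a predicted box and the ground-truth box. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle: max of the top-left corners, min of the bottom-right.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Sliding a 10-wide box sideways by 3 units already brings IOU down near .5,
# which is the point of the "humans can't tell .3 from .5" quote above.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0 (perfect overlap)
print(iou((0, 0, 10, 10), (3, 0, 13, 10)))  # 70 / 130, about 0.54
```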

For people who don't get how the last statement ties in: in the actual paper, the third line 'iudqnolq quotes has a superscript 1 on it. The PDF doesn't let you highlight the 1, so the punchline isn't as strong.

I hadn't noticed I missed that. Thanks!

Lost it at page 4 figure 3 caption:

> You can tell YOLOv3 is good because it’s very high and far to the left. Can you cite your own paper? Guess who’s going to try, this guy→[16]

This guy cites.

I lost it at citation 1, the Wikipedia page for analogy

Incredible. I wish people would take themselves less seriously like this. I would never have read that paper and learned a bit about that space if it hadn't been such an engaging read.

I love that they include a "Things We Tried That Didn’t Work" section.

At first I was a bit put off by the bloggish tone, but it doesn't obfuscate or impair communication of information, so yes, pretty good paper!

Thank you—this is priceless. Caption to fig 4: “...and we can still screw with the variables to make ourselves look good!“

As an academic, I’ve now seen the light!

Mind blown, thanks.

What genre of papers does this belong to? I want to read more.

Quite a few ML papers are written this way. See e.g. Single Headed Attention RNN

Is there a GitHub repo for the results cited in this paper? The paper is awesomely written, but I still have one point of confusion. We need more papers like this; I dislike the lifeless way new research is usually communicated. I have to force myself to read those, but not this one.

Now, coming to the question: are the input size and output size the same, and is that why the predicted box top-left coordinates (bx, by) correspond to offset (cx) + prediction (sigma(tx)), and similarly for y?
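For what it's worth, in the paper (bx, by) is the box centre rather than the top-left, and the decoding is done in grid-cell coordinates, so it doesn't depend on the input size: (cx, cy) is just the cell's offset in the output grid. A sketch of the paper's box equations (bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw·e^tw, bh = ph·e^th):

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw YOLOv3 predictions (tx, ty, tw, th) into a box.

    (cx, cy): top-left corner of the predicting grid cell, in grid units.
    (pw, ph): width/height of the anchor-box prior.
    Returns centre (bx, by) and size (bw, bh) in grid units; multiply by
    the stride (input_size / grid_size) to get pixel coordinates.
    """
    sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
    bx = cx + sigmoid(tx)   # sigmoid keeps the centre inside its own cell
    by = cy + sigmoid(ty)
    bw = pw * math.exp(tw)  # size is a multiplicative scaling of the prior
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# With all-zero predictions the box sits at its cell centre with the prior's size:
print(decode_box(0, 0, 0, 0, 3, 4, 2, 5))  # (3.5, 4.5, 2.0, 5.0)
```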

Thanks for the chuckles.

"Algorithms" doesn't seem to be the right term; this seems to be more like tooling, e.g. this wrapper for TensorFlow and YOLO training that runs a variety of monitoring tools for you:


OK, we've s/algorithms/tools/'d the above.

Well, they are the algorithms they're using, whether they built them or not.

They have simply checked in libdarknet.so; the blog rather oversells the algorithm part.

This is impressive only because they're in automotive. I worked in that industry for years; they are generally at least a decade behind the rest of the software world.

The tech in this blog post isn't at all representative of BMW at large.

There's a ton of 'decade behind' stuff that runs in VMs on Win10 machines.

Tesla isn't but this is one of the biggest problems I have with Tesla that I don't hear many people talking about. Tesla is more like a laptop with wheels than a car with a computer. That's fine but I'm not going to upgrade my car on the same schedule that I upgrade my laptop. It would be nice if Tesla allowed you to buy a new "shell" and transfer over your old battery pack. I wonder what the cost breakdown between battery pack and everything else is.

A replacement battery pack for the Model 3 is $15,000, so the manufacturing cost will be a bit under that.

A decade behind would mean their software security is at Windows 7 level at least. It's far from that. More like Windows XP first edition at best.

Here's a nice video of the YoloV3 (which these tools are using) in action from the author of YoloV3 (Joseph Redmon): https://www.youtube.com/watch?v=MPU2HistivI

Sorry for the negative vibes, but this looks a lot like fake buzz to create some credible BMW-trained profiles: none of the profiles linked with the project is older than 5 days...

At least on using the age of git/GitHub repos to determine the legitimacy of a project/effort: I would say it's not uncommon for some groups to time the release of their code with the publication of some announcement of it. I'd also say it's not unusual to adjust (for example, collapse) the git repo history when publishing code.

As a MechE this is really cool to see. Is there similar tooling available online that I can read about?

I sure hope they don't let people's lives depend on the output of deep learning models.

Why not? Deep learning models are very promising for certain autonomous driving tasks. Take a look at this survey paper for examples:


Yes, I agree it’s mostly tools...

This is great. It would be good if all life-critical software was open-sourced like this. Maybe something that should in fact be required?

But, “in turn, we receive support in taking our AI software to the next level of development”

I bet they are mainly looking for help. They are trying to figure out why their software is not capable of doing what it is supposed to, and hoping someone can assist.

I’m ready to consult for them on this, if they’re open to hearing the bad news first...

> life-critical software

The article could be clearer on this, but when it refers to "production", I think it means factories and logistics. (At least that's how I put together "production" in the headline with "implementing next-level production processes throughout its plants" from the first sentence.)

In other words, I think they use these algorithms in manufacturing systems, and they aren't putting this software into the cars' computers.

Looks like you’re right on this. Probably wishful thinking on my part that at least one manufacturer would be doing the right thing wrt autonomous vehicles.

It's object tracking and classification, so imagine a 2 ton robot arm swinging around a factory and failing to detect a person in the way.

Factory floors will typically have a light-curtain around things like 2 ton robot arms...

I think having the source available for inspection is a very good goal, but there is the issue of "we invested millions into it and now anybody can copy it", which would be the de facto result even if they retained copyright on it.

You’re saying I should be forced as a developer to share the code of everything I do? That sounds pretty despotic, tbh.

I agree. But that’s not what I meant to say: only for life-critical software (think autonomous vehicles, airplanes, medical robots and devices).

Why not? If a person perceives proprietary software to be harmful, as a lot of people do, then it only follows there should be laws against it. Or, alternatively, if they see it like they do speech, others should be able to decompile it, comment, and redistribute it without consequence.
