
Why not have redundant sensors and crosscheck them?

Boeing 737 Max had only one AOA sensor [1], and that wasn't a great idea.

https://www.cnn.com/2019/04/30/politics/boeing-sensor-737-ma...




>Boeing 737 Max had only one AOA sensor

Just a small nit-pick, but it makes the case against Boeing worse: the airframe had multiple AOA sensors, but the base software only used one sensor's reading. Note that the image in [1] shows readings from both a "left" and a "right" AOA. From your link:

>software design for relying on data from a single AOA sensor

Boeing sold a software upgrade to read both AOA devices. (This still leaves the problem that if the two AOAs disagree, there may be cases where you don't know which one is bad.) The fact that they listed MCAS as 'hazardous' rather than 'catastrophic' means it was allowed to have a single point of failure. It also means they may not have fully understood their own design.[1]
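
To make the disagreement problem concrete, here's a minimal sketch (invented threshold, not Boeing's actual logic): two sensors let you detect a fault but not isolate it, while a third lets you vote the outlier out.

    # Minimal sketch, not Boeing's actual logic; the threshold is invented.
    DISAGREE_LIMIT_DEG = 5.5

    def crosscheck_two(aoa_left, aoa_right):
        """Two sensors: a disagreement can be detected, but not which unit is bad."""
        if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
            return None  # fault detected: disable the automation, alert the crew
        return (aoa_left + aoa_right) / 2.0

    def vote_three(a, b, c):
        """Three sensors: the median is robust to any single faulty unit."""
        return sorted([a, b, c])[1]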

[1] https://www.seattletimes.com/business/boeing-aerospace/black...


The high-level problem was the whole mess that Boeing and the FAA forced themselves into by lying to pilots about the plane being just like the old plane. This motivated Boeing to solve everything in a sort of clandestine way, which meant there was no proper error handling, and Boeing simply covered their asses by pointing to the runaway-stabilizer checklist, which they claimed was sufficient to handle MCAS problems.

This should never have been designed, greenlit, approved, or manufactured. It went against a culture of safety (least surprise, etc.) at every level.


It's the unfortunate case that they succumbed to schedule pressure, much like the Space Shuttle Challenger launch decision that led to catastrophe.

I think "lying" may be too strong of a word, but they were certainly incentivized to believe it was just a modification of an existing airframe. If I remember correctly, Boeing was essentially told by an airline that if they don't come up with a design quickly the airline would instead move their business to airbus. Certifying a modification to an existing design is much faster than certifying a brand new design, so every management decision was made with that lens. Add to it the political side where the competitor is a non-US company and it gets more complicated. Add to that the FAA is likely too understaffed to provide adequate oversight so they instead delegate it to the manufacturer.

This will be another case study in misaligned incentives and the tradeoffs of engineering decisions made under uncertainty about both business and design risk.


That's the entire problem. (At least this is the view I ended up adopting, and thus my argument.)

It goes eyes-shut, full-throttle, balls-to-the-wall against safety. You don't rush nuclear power plants, so why are we doing it with planes? (And yes, we should reuse already-approved, known, standardized, modular components. But they should be indistinguishably the same, not "we certified it as the same by introducing so much complexity, so many new failure modes, that we have to hide them".)

And if safety costs a lot, then we can work on improving our processes (design, safety evaluation), but the process itself should lead to better stuff, not "probably not that much worse, but marginally cheaper, so we can keep our semi-vendor-locked-in clients".

I'm aware of the Boeing-Airbus US-EU cockfight, and that, as usual, a coordination problem is at the root of this. But the way forward should be a healthier market and more competition, not a monoculture (or a duopoly). Or, if costs force us into such a pathological state of the airplane market, then we should treat it as such. (It should be viewed as one big problem. The total lack of healthy competition must be factored into the approval process, and a high-level Market Authority should step in to prevent regulatory capture and the usual, almost guaranteed, shenanigans.)


I completely agree. My concern is that the problem is rooted in human psychology rather than process control, which makes it much harder to fix.

My experience in aerospace is that the industry swings back and forth on this like a pendulum. A bad event happens, and then there’s a focus on safety improvements. Then years or decades go by and no bad event happens. Rather than attribute this increased safety to the process changes, people come to see those very same changes as a burden. Why are we spending all this money and time on quality and safety checks, they say. It’s killing our schedule and budget, they conclude. So they work to erode those processes until another adverse event happens, albeit of a slightly different flavor. Rinse and repeat.


Right, so what happens when one says wall and one says clear air? You brake? That is what happens now with phantom braking. Approaching a bridge? Camera sees air, radar sees wall, slam on the brakes.

You want the best sensor type rather than a blend of a high-fidelity sensor and a lower-fidelity one. The Tesla system has 8 cameras (3 forward-facing), so they definitely have overlap between what they consider their better cameras.

Time will prove which approach wins.


If you have redundant AOA sensors on a plane and they disagree, what do you do? Alert the pilot. You have to do the same on a self-driving car. You can't just ignore a serious malfunction, or pretend not to see it, just because you don't know how to handle it!

To be truly redundant you have to use different technologies, such as camera and lidar.


> If you have redundant AOA sensors on a plane and they disagree, what do you do? Alert the pilot.

Right, which means it's not a solution for L4/L5 autonomy, only for L2. Tesla is trying to reach L4/L5, so just alerting the pilot is not satisfying the design goal.

> To be truly redundant you have to use different technologies, such as camera and lidar.

I think that is an opinion and not a fact. Watch a video such as

https://www.youtube.com/watch?v=eOL_rCK59ZI&t=28286s

from someone working on this problem.


> Tesla is trying to reach L4/L5, so just alerting the pilot is not satisfying the design goal.

Neither is ignoring it. If a product can't meet its design goals under certain circumstances, should it ignore that, not even look for it, or alert the user that there is a catastrophic failure?

> I think that is an opinion and not a fact.

I think it is more common sense than anything else.


How about they reach Level 4/5 performance first, and then they can turn off the lidar. Because right now, the car is steering into plain-view obstacles. That's what they need to worry about, not whether stopping the crashes now goes against their Level 4 philosophy.


>To be truly redundant you have to use different technologies, such as camera and lidar.

This isn't necessarily true. From a reliability-engineering perspective, it depends on the modes of failure and the probability of each mode. If the probability of an AOA failure is low enough, you can reach your designed risk level by having two identical, redundant components. It all comes down to the level of acceptable risk.
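
As a back-of-the-envelope illustration, with made-up numbers and assuming the two failures are independent (the big caveat with identical units):

    # Toy numbers for illustration only; real reliability analyses are far more involved.
    p_single = 1e-5          # assumed probability that one AOA sensor fails on a flight
    p_both = p_single ** 2   # two identical units failing independently: 1e-10
    # Common-mode failures (icing, a shared design flaw) break the independence
    # assumption, which is the usual argument for dissimilar redundancy.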


>That is what happens now with phantom braking.

Uber seems to have programmed in an artificial delay for when the system got confused. There's a good breakdown of the timeline showing how the system kept misclassifying the bicyclist who was killed, but I couldn't immediately find it. That breakdown shows their strategy of implementing delays in the decision process. According to the NTSB report[1]:

>"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior"

When I read that in the context of the programmed delays, it seems to indicate "we wanted to avoid nuisance braking so we put in a delay when the system was confused." As someone who used to work in safety-critical software, it blows my mind that you would deliberately hobble one of your main risk mitigations because your system gets confused. While TSLA may be focusing on a platform that gets better data from better sensors, they still need to translate it into better decisions.
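
Back-of-the-envelope, with assumed numbers rather than anything from the report: even a short decision delay translates into a lot of distance at road speeds.

    # Assumed values for illustration; not taken from the NTSB report.
    speed_mps = 17.0      # roughly 38 mph
    delay_s = 1.0         # hypothetical suppression window
    extra_travel_m = speed_mps * delay_s   # ~17 m covered before braking even begins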

[1] https://www.ntsb.gov/investigations/AccidentReports/Reports/...


> "we wanted to avoid nuisance braking so we put in a delay when the system was confused." As someone who used to work in safety-critical software, it blows my mind that you would deliberately hobble one of your main risk mitigations because your system gets confused

Maybe it was put in place to avoid erratic braking in the absence of an obstacle, in order to avoid getting hit from behind by other vehicles (whose drivers wouldn't see any obstacle and would be unprepared for the car ahead braking).


That's a good point and might have been their rationale, but I would argue it wasn't a very good risk mitigation: while they reduced the risk in one area (being rear-ended), they increased it elsewhere. Worse yet, they increased the risk in an area prone to higher-severity incidents (hitting a pedestrian, I assume, carries a higher severity than being rear-ended).


There’s a big risk of spine injuries, and a rear-end collision might not activate the airbag. Not a simple trade-off.


I’m not saying there’s no risk. I’m saying the risk is lower. The same hazards you mentioned are also present for the pedestrian, plus the risk of TBI or death.

The way this analysis is done is that a hazard gets assigned a risk score based on severity × probability. I’m saying the severity of hitting a pedestrian is higher. Put another way: if you had to choose between being struck as a pedestrian or rear-ended as a driver, which would you choose?
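
A sketch of that scoring, with invented 1-5 scales:

    # Hypothetical scales; real programs use standardized severity and
    # likelihood categories, but the multiplication idea is the same.
    def risk_score(severity, probability):
        return severity * probability

    rear_end   = risk_score(severity=2, probability=3)   # 6: more likely, survivable
    pedestrian = risk_score(severity=5, probability=2)   # 10: rarer, far worse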


The post I was responding to said lidar, not radar. But if you want to switch to radar, we can talk about that too.

> Camera sees air, radar sees wall, slam on the brakes

Seeing bridges as walls is not a fundamental property of radar. That's an implementation problem. If cars are doing radar poorly, maybe the fix is to start doing it less poorly instead of throwing it away entirely.


Well, that was specifically about blending two different sensors with different characteristics. For you walking, it would be like blending your eyes with your nose. If your eyes tell you the floor is safe, and your nose smells something bad, do you stop? Anytime you have two different sensors with different characteristics, you want "the best". Your body uses your eyes to see where to walk, and your nose to test whether the pizza is rotten. Blending multiple sensor types is tricky.

So back to LIDAR... same difference. Camera and LIDAR have different profiles. I think it's fine to use either, but I think trying to blend the two is a sub-optimal solution to the problem.

Again, this is my guess from what I know. I could be wrong, and the winning technology could use 12 different sensors (vision + radar + lidar + smell + microphones), and blend them all to drive. Cool if someone pulls it off! But if I had to do it myself or place a bet, I would put it on a single sensor type.


Unfortunately your example is inapt because the center of your vision and your peripheral vision may as well be entirely separate systems that don't overlap, and the way brains apply focus doesn't translate to how cameras work. Your scenario is closer to asking about the center of your focus saying the path in front of you is clear and your peripheral vision detecting inbound motion from the side. Peripheral motion detection overrides the clear forward view, but it's because they aren't trying to record the same information.

Here's why:

> If your eyes tell you the floor is safe, and your nose smells something bad, do you stop?

Absolutely, yes, if the bad smell smells like dog shit or vomit or something else that I definitely don't want to step in. If I'm walking, I'm very unlikely to be looking directly at my feet and much more likely to be looking ahead to do predictive path planning. I definitely do stop at least transiently in your scenario and then apply extra visual focus to the ground right in front of my feet so that I don't step on dog shit. The center of my vision is great at discerning details, but peripheral vision is terrible for that.

Anyway, the obvious answer to your inquiry, based on my explanation here, is to use confidence weighting and adaptive focus. If I think something might be happening somewhere, I focus my available resources directly on the problem.
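
A minimal sketch of what that could look like (invented numbers and structure; a real perception stack is vastly more complex):

    # Toy confidence-weighted fusion of two range estimates.
    def fuse(range_cam, conf_cam, range_radar, conf_radar):
        total = conf_cam + conf_radar
        return (range_cam * conf_cam + range_radar * conf_radar) / total

    # "Adaptive focus": when neither sensor is confident, spend more compute
    # on the disputed region instead of blindly trusting one sensor.
    def needs_closer_look(conf_cam, conf_radar, floor=0.6):
        return max(conf_cam, conf_radar) < floor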


This. When sensors disagree, you need to proceed with caution, taking a defensive stance and seeking out more information. If a self-driving car can’t do that, then it can’t really self-drive. I’m bearish on self-driving; my uninformed take is that it will require something very near general AI to pull off acceptably. So it’s kind of silly, again in my totally uninformed opinion, to focus on using our current methods to do self-driving. The focus needs to be on pushing toward general AI first, then just asking it to drive.


> If your eyes tell you the floor is safe, and your nose smells something bad, do you stop?

If my eyes tell me the floor is safe, but my ears hear it creaking and my feet feel it sagging, I'm going to stop in my tracks and back up. If Tesla can't handle correlating multiple different types of sensors, they aren't ever replacing a human driver.


> If your eyes tell you the floor is safe, and your nose smells something bad

Well, if I'm smelling gas, I know the situation isn't safe… (and thus my nose is giving me information my eyes might not have detected).


> but I think trying to blend the two is a sub-optimal solution to the problem

Research over the last decade has shown that LiDAR/Vision fusion outperforms Vision Only.

Can you explain the science behind your position?


Sensor fusion is used in a lot of places in robotics (and rockets!).

And yes, your percept is constructed from all functioning senses. What you look at and where your attention lands is often directed by your auditory system, which can locate things in 3D space.


> Approaching a bridge? Camera sees air, radar sees wall, slam on the brakes.

That's simplifying the situation a bit too much. The camera can give more results than air/not-air. Specifically in this case it could detect a bridge.

The same applies to the radar, really: you'll get returns from multiple heights, which would tell you that it may be an inclined street, not a wall.
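
A rough sketch of that idea (geometry only; the threshold is made up): project each return into forward distance and height, then check whether the surface rises like a ramp or stands up like a wall.

    import math

    # Hypothetical classifier; real radar processing is far more involved.
    def looks_like_wall(returns, max_road_slope=0.3):
        """returns: list of (range_m, elevation_rad) pairs from different beam heights."""
        pts = sorted((r * math.cos(el), r * math.sin(el)) for r, el in returns)
        (x0, h0), (x1, h1) = pts[0], pts[-1]
        if abs(x1 - x0) < 0.5:
            return True                    # height varies at one distance: a wall
        slope = (h1 - h0) / (x1 - x0)
        return slope > max_road_slope      # steeper than any plausible road incline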


This is where pre-mapped roadways help: an internal GPS database sent down to your car that says "Hey, at this GPS coordinate, there's a bridge. Here's how to navigate over it." Everywhere else the cars can use radar+camera. GM (Super Cruise) and Ford (BlueCruise) do this today.
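
Conceptually something like this, with every name and coordinate invented for illustration:

    # Hypothetical lookup; real systems use proper HD-map formats.
    KNOWN_OVERPASSES = {(37.7749, -122.4194)}   # made-up, pre-quantized coordinates

    def mapped_overpass_ahead(lat, lon):
        key = (round(lat, 4), round(lon, 4))
        return key in KNOWN_OVERPASSES   # if True, the radar "wall" is just a bridge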


I think you are missing parts. Have you watched this video from someone actually working in the field?

https://www.youtube.com/watch?v=eOL_rCK59ZI&t=28286s


No great reason not to _plan_ to use them. I mean, lidar is still kinda not great right now, but I'm sure it will be great at some point. But they could already be doing better with just cameras than they're currently doing, so why not fix that?



