
Here's what really happened:

Tesla decided they wanted 'bird's-eye view', which most of their competitors had. To get that, they needed more cameras and more camera inputs on their autopilot computer. It would be a big redesign of many features of the car. With bird's-eye view, parking sensors aren't really needed anymore, so they didn't place an order for new sensors.

However... The big new bird's-eye view feature gets delayed by ~a year, because it depends on new autopilot silicon (HW4).

Tesla is now stuck - they don't have the sensors to make the old version, the sensors themselves have been EOL'd by the manufacturer, and the new version isn't ready yet.

So - the CEO takes the fall, and announces that this was the plan all along. Software would replace the sensors.

That kept people happy for months. But the software team was pulling their hair out - the camera placement on the old HW3 cars was insufficient to see objects very near the vehicle, which parking requires, so it was never going to work well.

And the autopilot software team is being pulled in a lot of directions, and hasn't really met expectations on any targets lately - although perhaps they're trying to achieve the unachievable.

Given the original screwup (not having a backup plan for when this big new feature was delayed by a year), I think they made the best of a bad situation. Their approach of 'promise the impossible, and then underdeliver' will hurt the brand, but probably less so than any other approach.




> With bird's-eye view, parking sensors aren't really needed anymore, so they didn't place an order for new sensors.

Sorry, but that's a large logical chasm you just casually leapt. How does bird's-eye view magically determine the distance to a textureless wall when parallax fails to detect it?

Why do cars with bird's-eye view still come equipped with parking sensors if they are obviated?


Was just about to post this, glad someone else did.

I have a car with both (bird's-eye view and parking sensors), and they are absolutely complementary to each other, and I use them for different reasons. E.g. for squeezing through my garage door (I only have about 1.5 inches of clearance on each side) I like the bird's-eye view, but then when I pull up to the front of the garage I use the sensor, because I know exactly how many inches from the front my car should be so that I can walk around both the front and the back if I need to unload stuff from the trunk.


How do you open the car door inside the garage?!


By opening the door to the living room, then opening the car door into the free space cleared by the living room door. Easy peasy: https://www.youtube.com/watch?v=3-MaC9fFtz0

Edit: English subtitles: https://www.youtube.com/watch?v=q4jeN2QO1Gs


I love how at the end he puts two chairs in front of the door, as if otherwise it would be too easy


lol, never saw that before, the last step of pushing the car to close its door is just a beauty


At a guess, the garage is wider than its door.


Correct.


They exit via the sunroof.


What it comes down to is a strong, dogmatic belief at Tesla that vision will solve all problems. It's not aligned with reality, unfortunately.


Humans have top-notch visual/spatial reasoning systems and we still bang our shins and elbows on stuff. We shouldn't be removing proximity sensors from cars, we should be installing them on people!


To that point, humans have a whole proprioception system (as well as the direct sense of touch) and auditory system to help feed that spatial reasoning system.


A strong dogmatic vision that seems to come entirely from a desire for cost reduction rather than a concern for safety or functionality.


The set of parking sensors costs what? 100 USD? On a car that sells for upwards of $30-40k?

That's a pretty ridiculous penny pinching.


> That's a pretty ridiculous penny pinching.

Ford is eliminating AM radio from its entire line (except where contractually required), saving, I suppose, less than $10/vehicle. They’d rather deal with the bad press to save a few bucks from their BOM.


This may be marketing lies, but I thought that was due to AM radio not working well on electric vehicles [0].

[0]: https://www.nytimes.com/2022/12/10/business/media/am-radio-c...


If you can’t use an AM radio in an electric vehicle your vehicle is violating FCC regulations.


I don't think the FCC cares about interference unless it affects other people.

Look at TVs and computer monitors for example. I've got a Samsung SyncMaster T240 monitor that blasts out annoying noise all across the 2m ham band (and nearby police and fire bands). I've got to turn it off if I want to usefully scan those bands with a radio that is within a meter of the monitor.

Yet it passed FCC certification. Yes, it is noisy, but even if I were in a small apartment it would not be noisy enough to interfere with someone in another apartment. It just is a problem near the monitor.

Not all TVs and monitors are noisy. When playing with an RTL-SDR on my 2017 iMac I've never found any significant noise coming from the iMac for example.

Unfortunately this is something that reviewers don't seem to ever test.


Almost every EV now comes without an AM radio because of the interference. The original Tesla Model S had an AM radio, but they've since dropped it. It's apparently not a requirement any more. They must have done some special work to somehow eliminate the interference from the electric drivetrain.


Worth noting there are other savings associated with reducing complexity: operational, supply, QA, etc. But yeah, a car that's worth that much should certainly have them, I think.


I’d pay extra if they removed FM too. My previous car always blasted the radio when started. The radio is either white noise or people talking or music that isn’t mine… not sure which is worse.


Throwback to a past where you could remove the audio player and replace it with one you bought, all for not much money.

Modern cars, like all tech, are losing customizability and repairability in favor of slick designs and vendor control. I just hope the car-equivalent of tower PCs never fully go away.


Yes in those days car stereos were frequently stolen. The entertainment system is so integrated these days you'd have to steal the entire car.


Almost every component of your car is the result of absolutely insane cost cutting and overworked suppliers. Car companies are notorious for putting enormous pressure on subcontractors to reduce prices at all costs.


On a podcast (Lex Fridman, I think), the former head of the AI group seemed to say that Tesla would rather focus all of its resources on vision instead of spending some resources on researching, specifying, ordering, and calibrating sensors.


$100 here, $100 there, after a while you're talking about real money!

I guess it helps explain why Tesla's margins per car are so huge.


Munro & Associates estimates the cost of each sensor plus installation at $8, so $96 across the car; with the wiring, the total comes to $114[0], all including installation cost.

From sales alone, i.e. 420k a quarter or 1.6 million a year[1], they save about $180M a year (quick arithmetic below), and Munro estimates another $100k/yr in savings from removing them from inventory.

0: https://youtu.be/LS3Vk0NPFDE?t=276

1: https://ir.tesla.com/press-release/tesla-vehicle-production-...
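A minimal back-of-the-envelope restatement of those figures (the per-car and volume numbers come from the links above; nothing else is assumed):

    # Munro's estimate: 12 sensors at ~$8 each installed, plus wiring,
    # for ~$114 of cost removed per car
    cost_per_car = 114           # USD
    cars_per_year = 1_600_000    # ~420k per quarter
    print(cost_per_car * cars_per_year)  # 182,400,000 -> roughly $180M/year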


Mercedes and BMW couldn't get away with it because they sell on premium and German engineering.


Well, the whole VW cheating disaster was prompted by trying to save a few $ in parts, IIRC.


Nah, that was much more defensible from a penny pinching standpoint since they would’ve had to reengineer their vehicles for larger DEF tanks, would’ve had to represent lower fleet mileage, etc.

You may have been remembering the GM ignition crisis, which was exactly that — the engineering team shipped inadequate springs, at a potential cost savings of pennies per vehicle, that allowed the cars to turn off while moving.


Parking sensors are handy if you're looking to the side or back when reversing, since you can hear them chirp no matter where you look, without glancing at the screen in front. I honestly don't know where I'm supposed to look when backing up sometimes — it feels irresponsible to just look forward at a camera display, but in most cases that shows more than looking all around would.


>honestly don't know where I'm supposed to look when backing up sometimes

Even with the addition of backup cameras I still look all around and check blind spots as if I didn't have a camera. No backup camera I've used shows enough information to be confident enough otherwise.

Unless y'all have some truly high-end cars, I don't see how anyone can confidently back up with just a camera and not fear hitting objects/children etc.

I never even thought about my habit until a friend who is very terrified of riding in cars told me they were made comfortable by how much I look around me while driving. Which just convinces me further that most people shouldn't be on the road.


> Unless y'all have some truly high-end cars, I don't see how anyone can confidently back up with just a camera and not fear hitting objects/children etc.

If I look backward I wouldn't see a small child right behind my vehicle. The parking sensors and backup camera would, though. That's why I feel like it's weird to look backward instead of looking at the video feed from the wide-angle camera.


Which is where the utility of backup cameras comes from. It's nearly impossible, or at least impractical, to design a car that holds human beings and has a rear view with the same field of view as a camera. Even if the back of the car were a giant pane of glass, just having a trunk or back seat will occlude the driver's rear view enough that they could miss a curb, pet, or small child. Even a shitty camera will give a better view than most cars' actual rear view from the driver's seat. They're even more useful when you consider that they help shorter or taller drivers who don't have the ideal view out the back of the car.

I check blind spots I know my camera has but then use the camera to actually back up because it's got a far better view from the back of my car than I do from the front seat.


I mean, do both? Surely as a driver we should be using all the information we can get to drive safely. When I'm reversing I'll look around, get in position to reverse, and alternate between side mirrors, looking around, and the reversing cam.

Might be a side effect of how we're taught in the UK, but we were always told we should keep glancing in our mirrors whilst driving so it seems natural to me at least.


Turning my head quickly to pivot between the video screen and the back (and the back left) makes me feel like I'm being less safe than if I just looked straight at the video or possibly straight backward.


In most cases, I do an initial sweep, then use my camera. If I'm backing out from between two vehicles, I check the camera, then turn around and keep an eye out for anyone coming from either side. The fisheye on the backup camera helps a lot, but it's not going to show the idiot going 20 mph through the parking lot until it's too late.


I don't mean that I look backwards the whole time. I mainly look at the camera, but at the same time I'm glancing around checking mirrors and looking out of the windows for hazards. I've watched some people just stare down into a camera and then be surprised when they nearly hit a truck that was coming down the parking lot not yet visible in the camera.


That's why many cars also include a radar that detects and alerts you to traffic approaching from the sides before it comes into view of either the camera or the driver (the blind spots when reversing are large).

E.g. Mazda 3 has that - if it detects a vehicle or even a pedestrian approaching from the side while reversing, it starts beeping and showing orange chevrons on the camera screen to alert me.

Obviously, this doesn't remove the need to look around and not rely only on the sensors and cameras. The tech may fail, and the ultimate responsibility is with the driver.


No, but you might see a child walking or running in from the side


My camera has a very wide angle, and is 10 feet behind where I am (and can see around bushes/cars that I can't). There are definitely times that I can see things the camera can't see, but I feel like the balance is that the camera can see more.


A friend of mine was backing into an unfamiliar parking garage, watching the screen, when all of a sudden we heard a scrape. There was an air duct overhanging the car and we hit it. The camera didn't think to show the roof. Luckily for us we hit the duct before hitting the sprinkler. That would have been a disaster.


I nearly backed into a tree that managed to slip into the blind spot between the stitched-together camera feeds on a rented BMW with a 360-degree camera. I was only saved by the curb stopping me inches before impact with the tree.

tl;dr, look around in real life, too. I'm not sure why the parking sensors didn't notice the tree or the curb, to be honest.


> With birds eye view, parking sensors aren't really needed anymore, so they didn't place an order for new sensors.

Is OP saying that the proximity beeping would be done by the cameras (some sort of computational videography, like an iPhone?) rather than the old sensor type, or that the only answer would be visual? If the latter, that’s crazy - my truck has amazing cameras, but if I am trying to back up with the sun low in the sky behind me, or the camera covered in dirt or snow, it is useless.


"parallax fails to detect it"

AFAIK, pretty much none of the driverless systems use parallax. The reason is that parallax only gives good depth information out to about 10 feet at most. Disparity mapping is pretty limited in real life.
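To make that falloff concrete: stereo depth error grows roughly with the square of distance, dZ ≈ Z² · Δd / (f · B). A quick sketch, where every camera value is assumed/hypothetical:

    # Stereo depth-resolution sketch; all values are assumed,
    # not any particular car's camera rig.
    f_px = 1000.0       # focal length in pixels (hypothetical)
    baseline_m = 0.2    # spacing between the two cameras (hypothetical)
    disp_err_px = 1.0   # disparity matching error (hypothetical)

    for depth_m in (1.0, 3.0, 10.0, 30.0):
        # dZ ~= Z^2 * disparity_error / (focal_length * baseline)
        depth_err_m = depth_m**2 * disp_err_px / (f_px * baseline_m)
        print(f"{depth_m:5.1f} m -> depth error ~{depth_err_m:5.2f} m")

With those numbers the error is ~5 cm at 3 m but ~4.5 m at 30 m, which is why stereo parallax is only useful close in.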

Instead they get depth from video.

I'd imagine in almost every situation the car would cast a shadow on that featureless wall. Although Tesla might not have a camera placed in the right spot to actually see that shadow...


> Instead they get depth from video.

Do you have a source for that? DfM is pretty much always a worse option than multi-camera reconstruction AFAIK.

> almost every situation the car would cast a shadow on that featureless wall.

I think "almost every situation" a pretty low bar for automotive safety.

I don't know too much about what people do in the automotive world, but in robotics it's pretty common to have an IR pattern projector if you really need depth on featureless surfaces.


While it does somehow work for human eyes, it is quite difficult in computer vision. It could give you precise information, but you need to match two almost identical pictures via stereoscopy, and at a significant framerate and resolution. The advantage might be that you can have better conditions than a human skull provides, but the needed calculation power is quite significant.

I think this can only ever be reliable with a projection. A pulsed line laser or something that is synced with the camera. Then again, a simple ultrasound sensor might do a better job.

Otherwise the system might work, but is much more unstable in non-optimal conditions. Robustness is a pretty important feature for driverless systems. That said, stereoscopy would at least be better than only one camera.


Theoretically they would not be needed, but I agree that the mechanism of ultrasound sensors is much more reliable. Given how cheap they are relative to the margins on modern cars, I don't know why a manufacturer would leave them out in the first place.


Yes, my EV6 has both, and they are just 2 different useful features.


It's a parking sensor. How many parking garages are textureless walls? Borrowing phrasing from the grandparent: "promise the impossible" may indeed be bad marketing, but "demand the needless and infeasible" is at least as much a problem here. Everything, everything about Tesla has to be some kind of existential argument about how This One Thing Proves Everything I Hate. And.. it's still just a parking sensor.

Meh. I have the sonar equipment in my car and it works nicely, and I'd view its removal as a mild loss. But it wouldn't have changed the purchase decision. It's a technical mistake, and at the margins will probably hurt them in sales.

But there's nothing here to justify the size of the thread we're spending on it. It's a parking sensor.


>Software would replace the sensors.

This is a fault in design philosophy. Remember MCAS, where Boeing decided their solution to a hardware design problem was a software workaround? I know it's tempting, but these types of design decisions need a very careful thought process to work; they shouldn't be a last-ditch workaround. I guess I give TSLA some slack here because it's only applied to parking, but I hope they don't apply the same design philosophy on more safety critical systems.


Lol… they are going all-in on cameras, so yes, it's going to be used in critical safety systems. No more LiDAR. Relying entirely on computer vision without sensor fusion is asinine imho.

https://www.forbes.com/sites/bradtempleton/2022/10/31/former...


> No more LiDAR.

Lidar was never in any Teslas. You're thinking of radar, which they've added back with HW4. https://www.teslarati.com/tesla-hardware-4-hd-radar-first-lo...


MCAS wasn't to rectify a hardware electronics problem, it was for a dynamics/physics problem.

The equivalent would be if, say, the car was designed to be overweight on one side and susceptible to tipovers, and had software to automatically turn the wheels the other direction when it felt like it.


> it was for a dynamics/physics problem

Not exactly. The plane was (and still is) perfectly flyable without MCAS, but its handling characteristics were sufficiently different from the plane it was designed to replace that it required pilots to be retrained. That made its value proposition less attractive because of the extra training costs. MCAS was an attempt to make the new plane handle like the old one so pilots would not have to be retrained. And it failed catastrophically.

The point is, it was a completely artificial problem produced by business considerations, not physics.


> it was a completely artificial problem produced by business considerations

Making the plane behave the same as a different airplane makes it safer. There are many examples of crashes due to pilots reacting to an emergency in a manner appropriate to the previous plane they were flying, rather than the current one.

About 98% of the mass media reporting on MCAS was written by people who have no idea how airplanes work, and is hysterical nonsense.


Are you making the claim that MCAS made the plane behave similar to previously certified designs? Maybe that was the intent, but the execution seemed very much the opposite. That's largely the point: the software workarounds don't behave like a hardware-engineered mitigation.


The MCAS system is still on the 737MAX.

> the software workarounds don't behave like a hardware-engineered mitigation

You probably should never get on an Airbus, because they won't fly without computers.


I'm aware that MCAS is still in use. You may be misreading what I'm saying. I'm not making a claim that software is inherently dangerous. I'm making the claim that software as a workaround to sound risk mitigation is dangerous, especially when it's a workaround for a hardware problem, because of the interaction effects. It's pretty clear from the hazard analysis and subsequent decisions that Boeing didn't understand and mitigate the MCAS risk effectively.

And, yes, I'm aware of the two philosophical camps regarding ultimate authority in command (pilot vs. software). That doesn't negate my point. Software, in either case, shouldn't be a workaround solely because it's easier to implement than a hardware change. Using it as an engineered mitigation is ok, but you have to actually implement the mitigations properly. For example, the hazard analysis listed MCAS as "critical". With that classification, it required redundant sensors, yet Boeing didn't make that the default and opted instead to sell it as an option. (Never mind the fact that the classification should have been higher, they didn't even follow through with their own processes based on the mis-classification).


> I'm making the claim that software as a workaround to sound risk mitigation is dangerous

Mechanical systems aren't inherently better or more resistant to hidden flaws. Remember it was a failure of the mechanical AOA sensor that initiated the MCAS failure.

> Software, in either case, shouldn't be a workaround solely because it's easier to implement than a hardware change.

Software runs our world now. Mechanical computers on airplanes were around for decades before software, and they were hardly free of fault and problems.


>Mechanical systems aren't inherently better or more resistant to hidden flaws.

Besides the fact that failure modes of mechanical systems are generally more understood than software, you're again conflating my point and having an entirely different conversation. This isn't about some "mechanical vs. software" dichotomy. I literally said there is nothing about software that makes it inherently dangerous and that using software as an engineered mitigation is fine if the mitigations are implemented properly.

What isn't fine is all the process and design gaps that occurred. Things like using software as a mitigation, not because it was the best alternative, but simply because it was cheaper/faster. Things like not following your own hazard analysis when it comes to mitigation. Or not characterizing the risk accurately or the failure modes because you didn't understand the system interactions.

>Remember it was a failure of the mechanical AOA sensor that initiated the MCAS failure.

This is exactly why, had they followed their own procedures and hazard analysis, redundant sensors would have been a default. A "critical" item (like MCAS in the hazard analysis) is supposed to get redundant input as a default.

My post isn't about mechanical vs. software. It's about the allure of using software as a workaround, which leads to process gaps and bad design philosophy. Like removing sensors because of a supplier/cost issue and assuming "we'll fix it with software" instead of doing the hard work to understand, characterize, and mitigate the risk effectively.


> Like removing sensors because of a supplier/cost issue

There was more than one AOA sensor. Not hooking the other one up to the MCAS system could hardly be a cost issue. Nor is it an issue of software vs hardware. It being implemented in software had nothing to do with MCAS's problems. The software was not buggy, nor was it a workaround. What was wrong was the specification of how the software should work.


I was bringing the context back to the article of this thread, not talking about Boeing there. Sorry if it led to confusion.

You can note that I acknowledged there were multiple AOA sensors in other replies. Further, they were already “hooked up” to MCAS, but Boeing made this safety critical redundancy requirement an option within the software. That’s bad practice, full stop.

I still maintain that the software was 100% a mitigation to a hardware change, and I think there’s plenty of other evidence supporting that. E.g., had they not updated their engines, would MCAS be installed? If the answer is no, then it was a mitigation to a risk introduced by a hardware change.

I’ll say it one more time just to be clear: I’m not saying the concept was bad. I’m saying their design philosophy and implementation were bad. They could have used software mitigation within the right process/philosophical framework just fine. What doesn’t work is using software as an “easy” risk mitigation strategy when you don’t understand the risk or the processes necessary to fully mitigate it. Software is a seductive fix because it looks relatively easy and cheap on the surface, but if your design philosophy and processes aren’t equipped to implement it effectively, that “easy” fix is rolling the dice.


> That’s bad practice, full stop.

No disagreement, there.

> it was a mitigation to a risk introduced by a hardware change

It wasn't really a risk. It was to make it behave the same.

Allow me to explain something. A jet airliner is full of hardware and software adjustments to the flying characteristics. For example, look at the wing. What do you think the flaps and slats are for? They are to completely change the shape of the wing, because a low speed wing is very very different from a high speed wing. There are also systems to prevent asymmetric flaps, as that would tear the wings off.

The very existence of the stab trim is to adjust the flying characteristics. The stab trim has an automatic travel limiter to constrain the travel as the speed increases because, you guessed it, full travel at high speed will rip the tail off.

The control columns are connected to a "feel computer" which pushes back on the stick to make the airplane feel in a consistent way from low speed to high speed. This feel computer can be mechanical or software. Pilots fly by the force feedback on the stick, not the travel of it. The idea is to make the airplane "feel" like a completely different airplane. Without the feel computer, they'd promptly rip the airplane apart.

There are plenty more of these. The MCAS concept is no different in any substantive way.

Your thesis that using software to run it poses some unusual risk is simply dead wrong. What was wrong with the MCAS system was:

1. reliance on only one sensor

2. too much travel authority

3. it should have shut itself off if the pilot countermanded it
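(A toy sketch of what fixes for points 1..3 look like in control logic - emphatically not Boeing's actual code; every name and threshold below is invented for illustration:

    # Illustrative only; thresholds are hypothetical.
    AOA_DISAGREE_LIMIT = 5.5   # deg; redundant-sensor cross-check (point 1)
    MAX_TRIM_PER_CYCLE = 0.6   # deg; cap on nose-down authority (point 2)

    def mcas_trim_command(aoa_left, aoa_right, pilot_trim_input,
                          aoa_activation=10.0):
        # Point 1: act only when both AOA sensors agree.
        if abs(aoa_left - aoa_right) > AOA_DISAGREE_LIMIT:
            return 0.0
        # Point 3: stand down if the pilot is countermanding.
        if pilot_trim_input != 0.0:
            return 0.0
        aoa = (aoa_left + aoa_right) / 2.0
        if aoa <= aoa_activation:
            return 0.0
        # Point 2: clamp how much nose-down trim one cycle may command.
        return -min(aoa - aoa_activation, MAX_TRIM_PER_CYCLE)

Per points 1..3 above, the shipped system had none of these guards.)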

What was also wrong was:

a. Pilots did not use the stab trim cutoff switch like they were trained to

b. The EA pilots did not follow the Emergency Airworthiness Directive sent to them which described the two step process to counter MCAS runaway

There weren't any software bugs in MCAS. The software was implemented according to the specification. The specification for it was wrong, in points 1..3 above.

P.S. Mechanical/hydraulic computers have their own problems. Component wear, dirt, water getting in it and freezing, jamming, poor maintenance, temperature effects on their behavior, vibration affecting it, leaks, etc. Software does not have those problems. The rudder PCU valve on the 737 had a very weird hardover problem that took years to figure out. It turned out to be caused by thermal shock.


In various times in my past I've been a private pilot, airframe mechanic, flight-control-computer engineer, aerospace test & evaluation software quality engineer, and aerospace software safety manager. I've even worked with Boeing. So I am quite familiar with these concepts.

>It wasn't really a risk. It was to make it behave the same.

Hard disagree here. The fact that it did not behave the same and led to mishaps shows there is a real risk. That risk could have been mitigated in various ways (e.g., engineering via hardware or software, administrative via training, etc.) but they did not. You downplaying the risk as not credible is making the same mistake.

">There weren't any software bugs in MCAS. The software was implemented according to the specification.

I'm not claiming there were bugs. This seems to be a misattribution regarding how software fails. There are more failure modes than just "bugs". It can be built to spec but still wrong. This is the difference between verification and validation. Verification means "you built it right" (i.e. it meets specs) while validation means "you built the right thing" (i.e. it does what we want). You need both, and in this instance there's a strong case they didn't "build the right thing" because their perspective was wrong.

>Your thesis that using software to run it poses some unusual risk is simply dead wrong.

My thesis is that they didn't know how to effectively characterize the software risk because, as you point out, software risks are different from the risks of mechanical failure. Software doesn't wear out or display time-variant hazard rates like mechanical systems. Rather, it incurs "interaction failures." The prevalence of these failures tends to grow exponentially as the number of systems that software touches increases. It's a network effect of using software to control and coordinate more and more processes, and it is distinct from buggy-software failures. Which is why we need to shift our thinking away from the mechanical reliability paradigm when dealing with software risk. Nancy Leveson has some very accessible write-ups on this idea. There's nothing wrong with using software to mitigate risk, as long as you're actually characterizing that risk effectively. If I keep thinking about software reliability with the same hardware mentality you're displaying, I'll let all those risks fall through the cracks. They may have verified the software met specs, but I could also claim they didn't properly validate their software, because you usually can't without understanding the larger systemic context in which it operates.

So what does that mean in the context of Boeing and, in a broader sense, Tesla? Boeing did not capture these interaction risks because they had an overly simplified idea of the risk and mitigations. They did not capture the total system interactions because they were myopically focused on the software/controls interface. They did not capture the software/sensor interface risk (even though their HA identified that risk and required redundant sensor input). They did not capture the software/human interface risk, which led to confusion in the cockpit. They thought it was a "simple fix". Tesla, likewise, is trying to mitigate one risk (supplier/cost risk) with software. TFA seems to implicate them in not appropriately characterizing the new risks they introduced with the new approach. I'm saying that is a result of a faulty design philosophy that downplays (or is ignorant of) those risks.


Neither will the Boeing 777 or 787!


The 757 was also engineered to behave the same way as the 767, to reduce costs in pilot training and improve safety.

The 757 has a stellar safety record.


It’s getting tiresome to say this, but you completely bypassed my point to talk about something different.

MCAS did not make the airframe operate the same in practice, especially from a human factors perspective. It confused pilots about how it was reacting. The plane acted very differently from previously certified designs and that was a major factor in the accidents.


This is just not true. What confused the pilots and caused the accidents was that MCAS was looking at only one faulty AOA sensor.


Sorry, but there are multiple reports that point to pilot confusion. What you're displaying is exactly what Boeing leadership showed: an oversimplification of the problem, leading to poor understanding of the risk and necessary mitigations.

MCAS had a built-in delay; it wasn't a continuous command. It would push the nose down, periodically disengage, and allow pilots to bring the nose back up. This type of intermittent feedback is difficult to resolve in real time, especially for an untrained pilot under stress.

But to the larger point, what you're relating is actually bolstering my point. The design philosophy was poor; they didn't fully understand the interaction effects of the system (to include software, hardware, people, and the environment). In that simplified mental model, they thought software was an easy fix to their problem and they didn't follow through with the necessary risk mitigation. This includes having a redundant AOA sensor feed into MCAS (which their hazard analysis already required), characterizing MCAS properly as having the potential for causing a 'catastrophic' mishap, training for their pilots (which they didn't think was necessary because it was the 'same' airframe, despite different handling characteristics), and an appropriate understanding of the human factors that govern its use.

If we erroneously simplify our mental model and claim it's "just software" and an "easy fix," we miss all of that.


Well, I'm a (private) pilot, so I like to think I have some clue about how airplanes work.

It's true that, all else being equal, making an airplane's handling characteristics the same as a familiar predecessor improves safety. But clearly all else was not equal here.


There was nothing wrong with the MCAS concept. What was wrong was its reliance on a single sensor, such that a bad sensor made it misbehave. Other things wrong with it were it had too much authority, and it should have disabled itself if the pilot was countermanding it with the control column.

The other thing wrong was the pilots not using the stab trim cutoff switch, which is supposed to be a "memory item" for them.

I worked on the design of the 757 stabilizer trim system. The cutoff switch was always the backup for things going wrong with the trim. It's right there on the console within easy reach for a damn good reason. (Other systems could be turned off by overhead circuit breakers, but the stab trim cutoff was placed in a special priority position.)

As for needing software at all, all jetliners have an active yaw damper to keep the pointy end forward. This is to counter a stability problem from having swept wings. Pilots of low&slow straight wing aircraft are often not familiar with this. A Cessna will be stable if you just let go of the controls. A swept wing jetliner, not without augmentation.

The mass media also omits the reason for the MAX. The new engines gave it 15% less fuel burn. This is massive cost savings (and less pollution, too.)


I fly an SR22, which has a yaw damper. It's not strictly necessary -- the SR22 is quite stable without it -- but I'm familiar with the concept.

The facts of the MCAS debacle have been litigated to death (literally!) and since you work in the industry you probably know more about them than I do. However, I'm still going to respectfully take issue with this:

> There was nothing wrong with the MCAS concept.

That's a vacuous claim because "the MCAS concept" is not well-defined. If "the MCAS concept" is something like "an automated control system that always does the Right Thing" then obviously there is nothing wrong with that. But MCAS was never an automated control system that always did the Right Thing. The problems with it were known -- indeed, self-evident -- long before anyone actually died. MCAS was explicitly designed to be an automated control system that sometimes did the Right Thing, and sometimes did the Wrong Thing (with a single point of failure), but that was OK because when it did the Wrong Thing, the human pilots would take over and do the Right Thing in its place [1]. I think it's pretty clear, even without the rather definitive evidence in the form of two lost aircraft, that there is quite a bit wrong with that concept. But maybe we'll just have to agree to disagree about that.

And, somewhat less respectfully...

> The new engines gave it 15% less fuel burn. This is massive cost savings (and less pollution, too.)

I'm sure the families of the victims will take great comfort knowing that their loved ones died efficiently.

---

[1] UPDATE: And, I might add, they were expected to do the Right Thing without any additional training, because the whole point of MCAS was to make the new plane behave like the old plane. Which it manifestly failed at, rather spectacularly. But that is neither here nor there, because I think the case can be made that the "MCAS concept" was flawed even without this little detail.


> I fly an SR22, which has a yaw damper. It's not strictly necessary

The SR22 is a straight wing aircraft, not a swept wing one. "Some aircraft, such as the Boeing 727 and Vickers VC10 airliners, are fitted with multiple yaw damper systems due to their operation having been deemed critical to flight safety.[1][4]" https://en.wikipedia.org/wiki/Yaw_damper

There have been crashes due to failure of the yaw damper and the pilot being unable to control the resulting instability.

https://en.wikipedia.org/wiki/Dutch_roll#Accidents

> I'm sure the families of the victims will take great comfort knowing that their loved ones died efficiently.

We should get rid of jet airliners entirely. The whole point of jetliners was to reduce operating costs. People have died because of many flaws in jetliners. Should have just stuck to the DC-3.


Yes, the cutoff would have worked had it been activated quickly enough. The failure is insidious though.

a) When MCAS falsely activates, it only does so briefly. Then, assuming the problem is that the AoA sensor is broken, it activates again every five seconds. This is very confusing as the problem appears to have resolved and then it comes back.

b) During initial climb, other parts of the speed trim system are normally working and the trim wheel is turning and clacking with no input from the pilot. If not for the big pitch down, the extra MCAS inputs would not be otherwise noticeable.

c) The pilots had no way of knowing that MCAS even existed, because all references to it were deleted from the flight manual, so they had no way of being ready for such a set of circumstances.

d) After MCAS has activated a couple of times, if you then recognize a problem and flip the trim cutoff, you lose the manual electric trim as well, and the trim is so nose down that you don't have the physical strength to turn the trim wheels. If you turn the trim cutoff back on to try to recover control, MCAS will hit you again.

e) The procedure on previous versions of the 737 for dealing with a plane so out of trim that you couldn't turn the wheel was to pitch down even more to unload the stabilizer and then crank the trim wheel, but this procedure was also deleted from the flight manual, I believe as of the 737 Classic series.


>The mass media also omits the reason for the MAX. The new engines gave it 15% less fuel burn. This is massive cost savings (and less pollution, too.)

Sure they did. It's all over the reporting. Just from a cursory search:

>Boeing gave the Max aircraft larger engines for greater fuel efficiency[1]

>Consequently, improving fuel efficiency has emerged as one of the major bases of competition between airline manufacturers.[2]

>Airbus announced the A320neo, a more fuel-efficient version of the A320...Boeing had to choose between short-term gain and long-term pain. The simpler option was to refurbish the 737NG with a bigger, more fuel efficient engine.[3]

>Mistakes began nearly a decade ago when Boeing was caught flat-footed after its archrival Airbus announced a new fuel-efficient plane that threatened the company’s core business. It rushed the competing 737 Max to market as quickly as possible.[4]

>That threatened to change in 2010 when Airbus introduced a version of the 320 called the Neo (for “new engine option”) that offered large improvements in fuel efficiency, range and payload. The following year, American Airlines warned that it might abandon Boeing and buy hundreds of the new Airbus models. Boeing responded with a rush program to re-engineer the 737[5]

The real issue was leadership strategy and poor process control.

[1] https://www.scientificamerican.com/article/despite-similarit...

[2] https://www.vox.com/2019/4/5/18296646/boeing-737-max-mcas-so...

[3] https://dhruvmark.medium.com/lessons-in-product-management-f...

[4] https://www.theverge.com/2019/5/2/18518176/boeing-737-max-cr...

[5] https://www.nytimes.com/2019/09/18/magazine/boeing-737-max-c...


When I said hardware, I meant in the broader airframe hardware, not specifically electronics hardware. The physics you mention is a result of the center-of-gravity of the airframe design. The broader point being, it became cheaper and more expedient (at least superficially) to try and fix a hardware problem with software.

(Although Boeing did also create a cringey option to incorporate the already existing redundant AOA sensors only as a software option, rather than as a default.)


> The physics you mention is a result of the center-of-gravity of the airframe design.

No, it's not. The larger size and more forward placement of the engines on the MAX caused it to pitch up in certain high angle-of-attack maneuvers, so MCAS was designed to automatically trim the aircraft to make it act more like the NG.


Yes, it is. But you don't have to take my word for it.

>"heavier engines that are also cantilevered further forward on the wing to provide more ground clearance. That changes the center of gravity."[1]

The "pitching up" is a direct result of the change in the center of gravity and thrust vector.

[1] https://www.seattletimes.com/business/boeing-aerospace/faa-e...


Therac-25 is another example of hardware being replaced by software and things going terribly wrong.


I think there are some parking scenarios that might be catastrophic.

"backing into the pedestrian" is pretty bad, but what about fire hydrants or gas pumps?


Aren't fire hydrant valves usually a few feet underground, which makes running them down not as exciting as it could be?

With gas (gasoline) pumps, at least in Canada, the pump itself is underground and you're really just holding a big nozzle. Bad, but not too bad. People have been driving into and through those for years.

Natural gas lines and meters could be a catastrophe, but they're usually pretty close to a wall.


Those don't move, so memorizing their location/occupancy should work better.
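Roughly the idea, as a toy sketch (everything here is hypothetical, and it ignores odometry drift and moving objects, which is exactly where this gets hard): detect an obstacle once, convert it to a fixed frame, then track your own motion relative to it.

    import math

    class ObstacleMemory:
        # Toy 2D dead-reckoning memory for static obstacles (illustrative).
        def __init__(self):
            self.points = []                       # obstacles, world frame
            self.x = self.y = self.heading = 0.0   # dead-reckoned car pose

        def update_pose(self, dist_m, dheading_rad):
            # integrate the car's own motion
            self.heading += dheading_rad
            self.x += dist_m * math.cos(self.heading)
            self.y += dist_m * math.sin(self.heading)

        def remember(self, range_m, bearing_rad):
            # turn a one-time detection into a world-frame point
            a = self.heading + bearing_rad
            self.points.append((self.x + range_m * math.cos(a),
                                self.y + range_m * math.sin(a)))

        def nearest_m(self):
            # distance to the closest remembered obstacle, if any
            return min((math.hypot(px - self.x, py - self.y)
                        for px, py in self.points), default=None)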


How much object persistence does this system actually have? Watching objects jitter around the screen and morph from a sign to a cone to a person back to a sign hasn't inspired confidence in me.


They did the same with the rain sensor a few years ago. Result: auto rain wipers are still in beta in 2023!


> Remember MCAS, where Boeing decided their solution to a hardware design problem was a software workaround?

The problem was the software wasn't dual path. There was nothing wrong with the concept.


Not the concept, the implementation. That's why I'm referring to it as a flaw in design philosophy. They did not follow through with the appropriate mitigations for using software in a safety-critical application.


> Their approach of 'promise the impossible, and then underdeliver' will hurt the brand, but probably less so than any other approach.

I have worked for several other automakers and they plan platform changes 3-7 years out. I know that's not Tesla's style, but that long term planning usually prevents these cases of under delivering what you promised. That's because the marketing department isn't promising anything until engineering has it working on a mule and manufacturing operations has the parts on order for production.


>the CEO takes the fall, and announces that this was the plan all along. Software would replace the sensors

That's the opposite of taking the fall. That's shifting the blame to the software department.


The idea that bog-standard 'bird's-eye view' would require HW4 is the saddest thing I've heard today. How powerful is the computer that Nissan puts in a Rogue to provide that view, compared with HW4?


The processing power probably isn't the limitation. It's probably hardware, e.g. the number of camera ports is limited. These aren't webcams stuck into a USB hub. If they're physically out of pins, then hardware development will be required, either for some hacky, relatively dangerous (you would be switching out primary cameras) multiplexing solution, or the addition of a computer and the support/supply-chain nightmare that entails.


I'm guessing they mean more it's a limitation of where the cameras are placed rather than the processing power. HW4 slightly tweaks the camera positions and angles which I believe removes the current blindspots that make birds-eye impossible.


> The big new bird's-eye view feature gets delayed by ~a year, because it depends on new autopilot silicon (HW4).

Why? Cars with orders of magnitude less processing power have that. Is that because of the lack of camera inputs?


They also said "the camera placement on the old HW3 cars was insufficient to see objects very near the vehicle, which parking requires, so it was never going to work well."

So it's probably more of that!


"Given the original screwup (not having a backup plan for when this big new feature was delayed by a year), I think they made the best of a bad situation."

Yes, by greed and arrogance they shot themselves in the foot. But the day after that happened, they had no more opportunities for greed so they dealt with things as well as they could and you can't criticize them for that, now can you?

The scenario of "we'll go from the old thing to the new thing, burn our bridges to the old thing, and not have the new thing ready" plays out these days in a multitude of industries, and it isn't something that should be taken as OK by those who are caught by it.


This is not accurate. Most cars that have 360/top down views still have parking sensors. Tesla has just been steadily removing features to increase profit margins.


I fail to see what would be inaccurate. That other car manufacturers are conservative has no bearing on it. It’s easily believable that Tesla would want a 360 view, and given their all-in approach to vision (see: LiDAR), one would assume they would want to get rid of the need for parking sensors. It’s also easily believable that they would go ahead with that before anything was ready, because of Tesla hubris, and would end up where they did.


So in other words, an amateur failure in leadership. How many other things did they fail to consider or plan for?


Removing the wiper stalk. That decision killed one German driver who was busy fiddling in the menus to figure out how to get the wipers to max when hit by heavy sudden rain (and yeah, their sensors don't work well, because they saved money by removing the industry-standard IR rain sensor and tried to rely on their front-facing camera).

They have changed it now so that if you push the button to do a single manual wiper stroke, the wiper speed icons pop up on the left side of the screen so you can then proceed to set them to max, but it's still a ridiculous, dangerous decision. I guess it doesn't rain in California, so nobody in charge bothered :)

It's happened many times while driving that I have, for example, overtaken a trailer that splashed an infinite amount of water on me while passing, and I had to hit max wipers just to see anything. A delay of a second there could mean a crash.


> They have changed it now so that if you push the button to do a single manual wiper stroke, the wiper speed icons pop up on the left side of the screen so you can then proceed to set them to max,

When did this change happen? Since very early on in the Plaid, I remember this popup wiper selection happening: https://youtu.be/gHTdsOImKmo


IIRC around the same time. Just tested it, and scrolling doesn't work on the Model 3; you still have to use the touch screen.


The common one is the lack of a rain sensor for the automatic wipers. It relies on image recognition, and it’s very bad. They could perhaps train the model a bit better, to not start the wipers at full speed when it’s a blue sky over a dry road. But the cameras are at the top of the windshield, and they can’t easily see water at the bottom.


> With bird's-eye view, parking sensors aren't really needed anymore, so they didn't place an order for new sensors.

I can see someone at Tesla making this determination if they never actually drove a competitor vehicle that has birds eye view and parking sensors.


> With bird's-eye view, parking sensors aren't really needed anymore, so they didn't place an order for new sensors.

I believe it was more that they couldn't get the sensors due to supply chain issues.


Can the cars sold without the sensors plausibly be retrofitted later by Tesla service? That would be a good gesture and solution but depends on how completely they removed it I suppose.


Based on investigation by GreenTheOnly, it's unlikely a retrofit will be possible: https://twitter.com/greentheonly/status/1625905186387505155?...


The cars without sensors could be retrofitted with sensors, but those sensors are no longer manufactured. You could maybe take some from scrapped cars, or design an entirely new but compatible sensor.


We are talking about a car company that's worth $600 billion. Can't they just pay someone to make those sensors?


Only about 1/30th of that is cash. Still enough, but also, they can just not pay someone to make the sensors, as they're doing.


In a nutshell, this whole process is the reason two Tesla owners are without heads. I realize that has nothing to do with the parking sensors, but the real problem is the engineering process.

Tesla is single-handedly responsible for setting the robotics industry back decades (pre-2007) with their capricious and glib attitude toward autonomous safety. Back then, it was completely unheard of for people to be testing 2-ton autonomous hardware in public on consumers. Enter Tesla and Elon Musk, and suddenly they are responsible for a general attitude that it's okay to just put these things into your community without any warnings, or laws, or safety protocols, or apparently even sensors.


Capitalists gonna capitalist, and unless laws or regulations prevent Tesla from testing their software in the public, they're going to march forward.

As for the accidents/deaths, responsibility needs to be shared between the owners as well as Tesla's marketing. Naming a feature "full self driving" when it's only level 2 autonomy is disingenuous at best. This may be a large part of the reason that people trust the system more than they should (fall asleep, stop paying attention, etc.)


I place blame first on Musk and the executives, then on the engineers, and third on marketing. Because the engineers knew, or should have known, better. They were educated to know better. I know some of those people and they are smart people. I know for a fact they know better. And yet two heads are without bodies.

They knew what the fix was after the first decapitation. Real sensors. Don't test on public roads. Don't test on consumers. That they didn't quit en masse after that is on them. That they went on to continue building and shipping the car without fixing the problem, leading to a second decapitation... there are no words. Malpractice doesn't cover it.

It's as if you were an engineer for a bridge which resulted in a collapse that caused loss of life due to the poor bridge design. If you go on to make that same bridge again... that's on you.


Regardless of their desire, it is very easy for the cameras on a Tesla in the rain to have water in front of the lens. This makes it very hard to see anything from the camera.

I dread how software trained on cameras working under normal conditions will interpret the resulting mess of an image.


This sounds plausible to me, but do you have any sources?


This still doesn't add up. The new HW4 Model 3 "highland" doesn't have more cameras. Notably, there is no front bumper camera. It also has one LESS camera in the windshield. The HW4 board DOES have space for more cameras, though.


I have both bird's-eye view and ultrasound sensors all round, and I'd easily give up the bird's-eye view first. And there is no way Tesla will completely replace ultrasound distance sensing with software. Ultrasound sensors are cheap compared to making multiple cameras equally reliable for depth sensing, which I imagine must require de-icing and wash nozzles to work well. I need to rub my finger over the lenses of my side and rear cameras every day when there's rain or snow (so daily for at least half the year).


> the CEO takes the fall, and announces that this was the plan all along. Software would replace the sensors.

The CEO doesn't "take the fall", he __caused__ the fall. He's the one who decided to gamble people's safety and hand his engineers yet another unnecessary hot potato for the sake of profit and shareholder dividends.


If they can't source the older sensors why are the S and X still being manufactured with them?


They're not. The March 2023 Model X does not have sensors.


Ah recently then!


Who is the CEO, again? What fall did they take? Must’ve been big to avoid naming them.



