[flagged] Tesla FSD head to head against Mercedes Driver Assist on the same road [video] (youtube.com)
54 points by alexrustic 14 days ago | 82 comments



The user who created that video is known to have a preference for Tesla (based on Twitter posts, and their YouTube channel is all about Tesla), so the video may be biased.

The Consumer Reports comparison at the beginning of the video is Mercedes "Driver Assistance" vs Tesla "Autopilot". These are technologies for lane keeping, speed-limit assistance, etc.; "advanced cruise control" is a better term for them. This video does not compare those systems.

Mercedes' self-driving system is called "Drive Pilot". Tesla's is called "FSD" (Full Self-Driving).

This video compares FSD to Driver Assistance; these are two different types of technology.


I definitely trust Consumer Reports more than whoever these randos are, particularly because it was pretty easy to see the pro-Tesla bias from the beginning even without being familiar with their other content. And I would think anyone would have alarm bells going off given the unbelievable 44-to-zero interventions result.


>The user who created that video is known to have a preference for Tesla (based on Twitter posts, and their YouTube channel is all about Tesla), so the video may be biased.

You are really understating this. This account is one of the few accounts that Elon directly responds to on a regular basis on X.


To quote one of the commenters:

> 2. They were using a non-geofenced Level 2 highway adaptive cruise system for Mercedes, NOT the Level 3 system, which is only available in Germany. The Level 2 system was never designed for city streets and has limitations on sharp turns. If this were an honest video, they would compare Autopilot vs. Drive Pilot on the highway, which they won't do because Drive Pilot is better.


I'm sorry, how come the level 3 system is only available in Germany and yet Mercedes gets to claim it as the first certified level 3 system available in US market? https://media.mbusa.com/releases/automated-driving-revolutio...


>> The production-ready version of DRIVE PILOT will make its on-road debut in California and Nevada in late 2023 with a limited fleet of Level 3 equipped EQS Sedans

>> Mercedes-Benz plans for further customer deliveries of DRIVE PILOT equipped MY2024 EQS Sedan and S-Class models in early 2024 through participating authorized Mercedes-Benz dealers in California and Nevada


By many accounts, Tesla FSD has improved rapidly, now feels "very human," and will soon have greater adoption than any other self-driving software.[a]

Speaking from personal experience, I've tested the most recent version of Supervised FSD, v12.3.6, and it's much better than most people realize.

Once Tesla stops requiring human supervision, I expect the FSD adoption rate to go "hockey-stick."

---

[a] Based on a conversion rate for Tesla "Supervised FSD" free trials of ~2%: https://old.reddit.com/r/RealTesla/comments/1cp5pgj/tsla_fsd...


Yes, it's way beyond what most people realize now.


I took my wife’s Model X out for a moderately complicated drive last night and was very pleasantly surprised by the new FSD behavior.

Very significantly improved over the last few updates.


Since Mercedes Drive Assist is SAE Level 2 and supported only on highways, the high number of interventions is a feature and not a bug.

The Mercedes driver seemed to be engaged and attentive, and therefore unlikely to play a video game, watch a movie, or stop paying attention and cause a crash.

It would be interesting to see Mercedes Drive Pilot (SAE Level 3) compared to Tesla FSD. That feature is not generally available in the US.


Unless there is a change in how autonomous driving software is trained, it seems Tesla is destined to dominate that market, right?

Tesla has the most training data consisting of video input + corresponding driver behavior. And they have the most compute power.

What twist of fate could cause Tesla to fall behind in the future?

I could imagine an invention that lets AI learn about the world just by observing and thinking, without the need for "training" in the form of "if you see this, do this." But as far as I know, nothing like that is on the horizon yet.


No twist of fate, but self-sabotage in the form of being unwilling to use Lidar. Look how hard Waymo and Cruise are finding it to do robotaxis in a small geographical area and with Lidar. Tesla is trying to do robotaxis everywhere without using Lidar. They're solving a harder problem and are handicapping themselves while trying to do it.

Here's why Lidar makes sense. You want independent information streams that are cross-checked. If both streams tell you there is no object in front of you, that's a far lower probability of being wrong than just one stream telling you that. You go from probability p to p^2 of being catastrophically wrong. It's the kind of probabilistic added safety of being in an aircraft that's capable of flying with only one engine. Even if one blows up with probability p, you're fine. You need a p^2 event for both engines to malfunction, assuming errors aren't correlated, which they probably are a bit, but not so much that the added benefit isn't massive.
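The p-to-p^2 argument above can be sketched numerically. The miss rate below is a made-up illustrative figure, not a measured one:

```python
# Sketch of the redundancy argument: two independent sensor streams
# must BOTH miss an obstacle for the system to miss it.
p = 0.001  # hypothetical per-event miss rate for one stream (camera)

single_stream_miss = p      # vision only
dual_stream_miss = p * p    # camera AND lidar both miss, assuming
                            # their errors are independent

print(f"single stream miss probability: {single_stream_miss:.6f}")
print(f"dual stream miss probability:   {dual_stream_miss:.9f}")
print(f"relative improvement:           {single_stream_miss / dual_stream_miss:.0f}x")
```

With fully independent errors, the second stream multiplies the miss rate by p; real camera and lidar failures are partly correlated (heavy rain degrades both, for instance), so the true gain sits somewhere between p and p^2.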


That analysis only works when p is non-vanishing, though. In point of fact, the "car drives into something it didn't see" failure mode doesn't really exist, or if it does, it's at a level much smaller than that of the human driver it's intended to replace.


It works regardless of how big p is. Engine failures in aircraft are exceedingly rare but this statistical effect of needing p^2 events instead of p events has saved many lives. Having multiple pilots has the same rationale. While errors are correlated, getting an error past two pilots is roughly p - p^2 less likely than if you had a solo pilot.

Self-driving, as a mission critical endeavor, needs to be leveraging the same statistical phenomenon in order to reduce the exceedingly rare but highly impactful mistakes.

And by "something it didn't see" I'm talking about CNN/ViT mistakes/hallucinations. I don't see any evidence that it occurs at a level that's much smaller than humans. Any safety data we have is with human oversight. So the majority of errors are caught by humans and won't show up in FSD crash frequency or other publicly available data.


> Self-driving, as a mission critical endeavor, needs to be leveraging the same statistical phenomenon in order to reduce the exceedingly rare but highly impactful mistakes.

Uh... why? Shouldn't the goal be to be safer than people?

Basically that's just bad engineering: If your argument is not to deploy an existing system that is better than the current installed base because some other technique might be better still, you are harming your metric (safety) and not helping.


I'm talking about robotaxis. There is no safety data available for vision-only robotaxis. It's a hypothetical system that we are speculating about.

If Tesla had one that's safer than humans, then yes they should be allowed to deploy it. My argument was that they're making it harder for themselves to exceed that safety threshold by insisting on vision only.


Counterpoint - Tesla is trying to avoid a local maximum where they are able to do robotaxis only in cars with $300,000 of equipment, on roads that are precisely mapped down to the centimeter.

Waymo and Cruise may have beat them to "first robotaxi" but I'm not convinced that they have anywhere near a scalable, sustainable model.


This is analogous to saying "we're trying to avoid a local maximum where we are required to use expensive chemical propellants in order to fuel our space rockets; therefore we're going to double down on stuffing them full of larger and larger quantities of gunpowder".


Vision-only sensing seems like it makes no sense when the cameras can get obstructed by weather.


The main cameras are behind the windscreen, so they're cleaned by the wipers.


I don't get this perspective. Somehow, humans drive with only visual input.


Humans are terrible drivers, and even the human brain's reasoning capability is still light years ahead of anything Tesla is capable of shipping.

The advantage of machines is not that they have better brains than humans, but that they have better senses than humans, which is an advantage that goes out the window when you obstinately refuse to use better sensors than merely cameras "because that's what humans use".


That hasn't been my experience in a self driving Tesla. Against a setting sun in stop and go traffic on the freeway it was far better able to sense the road than a human alone and I think it has a lot of redundancy as well as views into all directions.


The goal is to be better than, not equivalent to, humans-- right?

I work on a synthetic aperture radar system that is high-resolution enough to "see" the painted stripes on roads through fog and a thin layer of snow.

It's not for automotive use and would increase the price of every car by several hundred thousand dollars but a fusion of multi-spectrum sensors should be the direction we are headed-- not a minimally-viable mono-sensor system.


It's enough to be as good as a responsible human driver, but consistently. The AI is never going to be tired, distracted, angry, drunk, and so on. Drivers not bringing their A-game is probably the most common cause of accidents.

I'd be nervous of automobiles falling into the US housing trap.

In other words, failing to realize that raising minimum requirements (and therefore prices) makes it unaffordable for an increasing number of people.

Human-parity (in terms of accident rate, not failure mode) seems a reasonable minimum bar.


New automobiles are available for the same price they’ve always been.

For proof I offer the price of the 1969 Volkswagen Beetle, the least expensive new car for sale in the US market for almost its entire sales history: $1800. That’s for one with zero options. A rolling chassis with four seats and a motor.

That’s $16k today. Coincidentally, the same price as the Nissan Versa or Mitsubishi Mirage, with backup camera, air conditioning, and airbags.

People don’t WANT the cheap cars, though.

I know the average price of a new car has exploded.

That is a conscious choice by the consumer.

When production of autonomous tech scales it won’t increase costs as much as people assume.


Can SAR, scaled across congested traffic, ever be that cheap though?

Blanketing the frequency slice with echoes seems like a non-trivial problem. Or would that help?


And a human with centimeter depth perception could drive far better. The goal of an autonomous car isn't to replicate human driving; it is to drive safely, accurately, and in accordance with the law.


This is extremely important to remember, especially when Tesla describe their neural net approach as being easily fine-tuned to different jurisdictions.

I’ve seen the “human-like” behaviour of FSD 12.x praised a lot by channels like this, particularly where the car is breaking the rules in a way they consider “normal”. And it’s a fair argument that predictable behaviour improves safety.

However, behaviour that is common in the US - like making a turn into a side street while a pedestrian is beginning to cross - would be considered exceptionally aggressive and reckless here in Australia. It’s a cultural difference I’ve adapted to when moving back and forth.

At the end of the day though, when I walk across a street, I don’t want to have to worry if Tesla has fine tuned their model correctly to match our local expectations of yielding. I’d rather they just followed the law as closely as possible - because that’s the most predictable behaviour of all.


The car sees as well as the driver does. Is that good enough for all circumstances? No, clearly not. But refusing to drive in situations that are unsafe for people too sounds like a feature, not a bug.

I mean, let's be blunt: the "LIDAR vs. cameras" debate has been settled at this point, and the fancy sensors lost. Of all the things one can complain about with FSD, sensor fidelity is not one. Teslas don't hit things. The problems remaining to be solved are in the planning regime: my car still misses turns at a rate somewhat higher than I do, usually because it's in the wrong lane (and that often because it thought that the backup in the turn lane was something it could go around).


Data is just cars driving around. Mercedes cars have cameras too.

Compute power is just money, that is about the thinnest advantage you can have.


Not everyone has billions of dollars lying around. SpaceX couldn’t have beaten ULA if that was the case.

The truth is no one has the physical AI algorithm yet. Having cars equipped with cameras and big GPUs connected to the CAN bus is the easy part.

Tesla has some advantage since they are spending the most on automation compared to other car makers.

However compared to Waymo, Tesla is quite behind. Waymo has done a million rider-only rides.

Tesla isn’t even certified to do driverless rides. The driver has to be at the wheel and all accidents are the driver’s fault.

Tesla also doesn’t take safety as seriously as Waymo. So Elon may be a net negative to the org.

If the competent Tesla engineers got similar car hardware to Waymo, perhaps they could have been competitive. LiDAR solves a whole bunch of vision problems and gives extra high quality signal for processing.


>Not everyone has billions of dollars lying around. SpaceX couldn’t have beaten ULA if that was the case.

But SpaceX didn't have infinite amounts of capital during the years it developed Falcon 9 and Dragon. Until Tesla's market cap blew up during the COVID-19 era, Elon Musk had a "mere" few tens of billions of dollars. ULA's pockets were and are gigantic, too.

In any case, infinite capital guarantees absolutely nothing. Jeff Bezos has been among the world's wealthiest men for far, far longer than Musk's entry into that group. Let me paraphrase an excellent comment I saw on Reddit, in response to one of the usual lies about how the only reason SpaceX is a decade ahead of the rest of the world is that it got zillions in subsidies from the US government:

>If large amounts of funding is the only thing required to succeed, Blue Origin would now have a nuclear-powered spacecraft orbiting Pluto.


>Not everyone has billions of dollars lying around.

Mercedes can get them. So can any major car maker. Making cars is extremely capital intensive.


I see Teslas daily; I’ve never even seen a Waymo.


I couldn't actually find the amount of data that Tesla uses for training, but I'm curious: how much is it, and is it larger than what comma.ai has?


I would expect that they have access to the cameras of all Teslas on the road, which should be about 5 million:

https://www.statista.com/statistics/502208/tesla-quarterly-v...

And I would expect that those generate more data than they can crunch, so that the bottleneck is the number of GPUs they have.

They recently reported that they have the equivalent of 40k H100 GPUs:

https://twitter.com/marekgibney/status/1784133742983860307


They are collecting 1B miles of data every 2-3 months:

https://www.autoevolution.com/news/elon-musk-explains-the-la...

They will be using 85K H100s (up from 35K now) for training by the end of 2024:

https://www.shacknews.com/article/139619/tesla-ai-training-n...


That's not what the article says.

> This vastly expanded the FSD testing pool, with the Tesla fleet adding 1 billion FSD miles every 2-3 months

It doesn't mean they collect the video feed and all input from every second of those miles. Just that the FSD drives those miles.


Yes, they probably don't need to. But if you have your config set to share data with Tesla and you have an intervention while on FSD, that small few-second clip of video gets uploaded to them. That is the important data.


They can use every Tesla vehicle for training. They definitely have more data than anyone.


They can. But do they? Comma can collect from any vehicle, regardless of type.

There's also a matter of deduplication where they really don't need to use ~250 days a year of the same commute. They probably have more data than others, but I'm asking for a proper confirmation of this common knowledge.


Tesla's vehicles lack the advanced sensor suites of other manufacturers, which means that Tesla's moat amounts to a massive quantity of lower-quality data. When it comes to navigating using lidar and radar, they're dead last.


They seem to be very far ahead of anyone else; the new versions are pretty insane.


How do Mercedes and Tesla differ in terms of accountability when there is inevitably an accident? Tesla is known for turning off their assistance at the last possible moment in order to blame the driver. Mercedes assumes liability.

Seems like that is the real debate. Tesla weasels and whines, Mercedes steps up.


Yes, this is the beginning and ending of the conversation.

If Tesla (or any other manufacturer) actually has faith in their implementation, they will accept legal liability for its actions.

Contrapositively, if Tesla (or any other manufacturer) does not accept legal liability for the actions of their autopilot, then they don't have faith in its implementation.

Everything else is irrelevant snake-oil marketing fluff. Put your money where your mouth is.


Doesn't Mercedes guarantee level 3 only on certain stretches of road? Shouldn't a better comparison take place there?


Not just the road. Drive Pilot is a radar-based following technology. It won't drive on open roads at all; it can only follow a car ahead that it has affirmatively detected on radar (the point being to eliminate the false positives radar is known for). If that lead car changes lanes or gets too far ahead, the system will disengage. Likewise, there's a speed limit of 60 km/h or somesuch, etc...

It's mostly a stunt, basically. It's an answer to "What's the fastest path to SAE Level 3 in any context", not "How to engineer an autonomous driving solution?"


Uhh no, it’s called good engineering versus over-ambitious marketing. Systems are designed for specific operational design domains and they are expected not to work outside of the ODD. The key thing is that the operator needs to understand what the ODD is and whether they’re inside or outside of it. Mercedes has a very clear ODD with very clear guarantees inside of it. Tesla does not have any established ODD nor any guaranteed performance inside of it.

Mercedes is incrementally improving a system the exact way you’re supposed to engineer such a system. It is not an abuse of the SAE Levels framework, it is exactly why the levels are what they are! It is Tesla that is pulling off stunts.


Watching this video, I am impressed by how well FSD works! Watching FSD crash videos, though, I am reminded by the many spectacular failure modes of SAE Level <4 driving systems on public roads.

The NHTSA reported in April that most fatal FSD crashes involved no driver intervention, or driver intervention only in the last 1 second, and in all cases an attentive driver could have perceived and taken steps to lessen the severity of damage and injuries in the crash.

For anything less than SAE Level 4, the human MUST take over driving when requested. The question is, can we? Sign up for the beta test and find out, I guess!

Personally I would much prefer a new Mercedes E class wagon with Driver Assist (Distronic) to a Tesla with FSD.


Frankly, I do not see a justification for this technology at this point. Sure, it's good, but nothing is really comparable to an attentive operator when even one multi-thousand-pound hunk of deadly steel is hurtling down the road, let alone hundreds and thousands of them, with lives at stake. It is simply madness.

> Frankly, I do not see a justification for this technology at this point.

Drunks and the drowsy. Sure, they should have called an Uber or chosen not to drive at all. But the road is safer for having that multi-thousand-pound hunk of steel steered by a more competent and attentive driver (FSD).


Self driving cars should exist because careless people might not drive safely? That’s interesting. How about we just make it illegal to drive drunk or drowsy?

It already is illegal (certainly to drive under the influence). But consider even someone driving "buzzed" (not "drunk", and maybe not even strictly over the legal limit): is the roadway safer with or without "Full" Self Driving technology?

Surely this is sarcasm. You'd prefer a drunk person getting into a Tesla with FSD vs. an attentive and alert person driving (anything else)? I'm sorry but being drunk and getting behind the wheel, that's an automatic fail. I don't care what you're driving. You shouldn't be behind that wheel in a position of responsibility when you are under the influence. I include 'thinking the Tesla will just handle it' as under the influence.

My point is not that a drunk "should" drive, even with FSD. It's that clearly they do. I know someone personally who drives under the influence frequently despite the legal and material risks. No one thinks this is a good idea.

I would absolutely prefer any driver using FSD, but especially one physically and mentally inhibited by drugs or alcohol.

> I include 'thinking the Tesla will just handle it' as under the influence.

I've commented on this before, but: no one using FSD for even a short period of time is "under the influence" of the branding. The lived experience of the technology is that it is amazing and useful, even if it is flawed. YMMV.


I was surprised to see so many driver interventions on the Mercedes side - 44 vs 0 by the end of the 20 minute video. Maybe that's why it's branded "Driver Assist" and not FSD, so not a true comparison.

Also, a little funny to see a random Model Y appear just ahead of the Mercedes at the end of the video.


It’s hard to tell, but it looks like a lot of those interventions were just him accidentally disengaging it because he was keeping his hands off the wheel. And because of his bias toward and comfort with FSD, he would let the Tesla get away with more.


I didn't watch through the whole thing, but the first 12 interventions were definitely valid interventions: “we're drifting again, oh, we're on the yellow line again”, etc


I was blown away recently by the sophistication of Tesla self-driving when I visited the Bay Area. The one mistake it made was missing the correct path within SFO to drop me off at the correct terminal. That was a surprising mistake, and I didn't have time to see if it would circle around and get it right the second time.


Isn't the premise of this video, the Consumer Reports comparison, a bit of a red herring?

It seems like Consumer Reports is comparing Mercedes Driver Assistance with Tesla Autopilot and then the video goes on to compare it to Tesla FSD.

Obviously, the average person doesn't know there's a difference and may be misinformed by CR to believe that Tesla sucks, but we can acknowledge here that Tesla Autopilot is very meh compared to FSD, and it's entirely feasible that Mercedes has a justified better rating between these systems, even if FSD is light years ahead.


Well, either way: with all the articles about Mercedes having the first Level-whatever self-driving system, you would think it would actually be better than FSD.


Same deal with openpilot. Comma team claims they’re “solving self-driving cars” anytime people mention perception improvements or quality of ride concerns. openpilot basically can’t turn at intersections anymore.


Apples vs oranges


And yet speed-limit detection and phantom braking in my Tesla have only gotten worse over the last year. Or do FSD improvements not trickle down to non-FSD?


I don't think they do. I have not had a phantom braking event since 12.x came out.


It's disingenuous to make this kind of claim without stating what version you are using.


Latest stable version I can get, v11.1 apparently


Tesla has spent more billions on ML than any corporation in the world. I can’t wait to book my first FSD robotaxi ride at some point in the next decade.


You can book a robotaxi now with Waymo. Tesla has been promising that for years.

Tesla may have been first to seriously try but currently there's no evidence they will be first in anything compared to their competitors.


People who make the Waymo argument fundamentally do not get it.

Waymo works because they have mapped out cities with high-definition maps and millimeter-precision 3D scans. It only works in places with good weather. The rules of the road are hardcoded in millions of lines of fragile code. To top it all off, they are basically driven entirely by remote operators.

FSD works anywhere and doesn't rely on millions of dollars worth of equipment and pre-mapped environments. Tesla's technology is scalable in a way that Waymo never will be. You can buy a Tesla with FSD today for less than 50k. Google will eventually realize that Waymo is burning cash with no path to recovery, and shut it down just like every other Google product.


Tesla promised to make your own car a robotaxi, which is very different from stringently controlled Waymo cars. Tesla has to deal with customers doing stupid shit with their cars.


Mercedes? Pick an easier competition?

As much as European cars have a favorable reputation among American consumers, they are awful!

If you do benchmarking, you are going to find European cars have lots of appearance parts that are high quality, but the actual components are poor.

A more realistic comparison would be to compare Tesla vs Ford or GM. These are 3 companies that have similar tech/culture standards. This video is like comparing AMD to Apple.


Mercedes is the first company to sell vehicles with SAE Level 3 Autonomous Driving enabled in the United States [1].

[1] https://news.ycombinator.com/item?id=40098468


> This video is like comparing AMD to Apple.

What do you mean by this?


Had you watched the video before commenting, you would know the point of the video is that Consumer Reports gave Mercedes' self-driving a much higher rating than Tesla's. The video shows how awful the Mercedes software really is.


It seems the video is comparing a different Mercedes driver assist system to FSD?


Consumer Reports must have done similar tests to what is seen in the video (eg. just driving with these systems), yet they came out with the result that they did. What happened there?


One of those sources isn't fairly well-known for being obsessed with Teslas (or anything Musk-related really), the other one is.


Simple: The video is comparing apples to oranges.

They are comparing self driving to driver assistance.


Nice cherry-picking. I occasionally see a (ridiculously oversized) Chevrolet (Corvettes, I think) on the road here in Britain, and the build quality is utter shite, not to mention how fugly they are, but I suppose that is subjective.

How many McLaren, Ferrari, Lamborghini, Porsche, etc. equivalents does America produce? Beyond most of those examples, there is a good reason “German engineering” is an oft-repeated phrase. See, I can cherry-pick too (Europe also produces Fiats, lol, “fix it again tomorrow”).

EDIT: and tbh I don’t even agree with your cherry picked example either, every Tesla I’ve been in has had terrible build quality and been a horrible ride.



