A bit tangential but I just got a Tesla Model 3 and paid $6,000 extra for “enhanced autopilot”, buying into all the marketing about it, and it’s one of the biggest regrets of my life and makes me unable to appreciate an otherwise great car. I just feel sick about it. The enhanced autopilot is buggy and doesn’t save me any effort. The summon and auto park don’t really work. I think it’s a scam. Other than that it’s a great car.
Agreed on summon and auto park, they are total garbage and not at all useful. Navigate on Autopilot is slightly better, and can actually be nice occasionally, though there are plenty of situations where it just doesn't work. But there is a bit of the enhanced autopilot package that is genuinely useful, and that's autopilot lane changes. Being able to execute lane changes on the freeway just by pressing the turn signal once and without disengaging autopilot works well and is useful. Is it worth $6k? Maybe not, but it's something.
Personally, I bought FSD because I wanted to test it, not because I thought it would be robotaxi-level anytime soon. And also because I believe that Tesla will one day be forced to give free camera and computer upgrades to everyone that purchased FSD (on top of the HW3 upgrade I already got), because the current in-car cameras and computers are not enough to implement what they promised.
My new 2023 BMW M340i also has lane change, lane keeping, and radar-based adaptive cruise control, and under 70 km/h on the highway in traffic you don't even need to touch the steering wheel.
Ya, the Tesla basic Autopilot and Enhanced Autopilot are both on the same codebase, which has received minimal updates since the start of 2020, when they dedicated 100% of their AI effort towards releasing and improving the actual FSD Beta stack. The FSD Beta is quite a bit better at lane keeping, including general 'avoiding obstacles' and whatnot.
>The FSD Beta is quite a bit better at lane keeping, including general 'avoiding obstacles' and whatnot.
Tesla always has something just around the corner that's "better", and they get full credit for it as if it already exists. Like their robot tapping into a multi-trillion-dollar market.
Lane change is also buggy now for my Model Y. Since the last couple of updates, about 5% of the time it jerks back into the current lane after moving about 50% across the line, as if it had suddenly seen a ghost in that other lane. There is nothing there of course, but its jerking motion back to the current lane is very frustrating and embarrassing. Other drivers must think I'm having a seizure.
I drive the car 100% manually now. I don't even use cruise control it's that bad. $60K on a car that doesn't even have usable cruise control is disappointing.
My experiences with FSD tell me that it will never fulfill its promises and I'm just out $10K. Some other manufacturer maybe could do it, but not Tesla under its current leadership.
I'm hoping Lucid can pick up the mantle where Tesla left off.
> Personally, I bought FSD because I wanted to test it
Funny how Tesla is able to pull this off. I can't think of another brand that so easily gets people to throw large sums of money at it while expecting so little in return.
I wonder what percentage of Tesla car owners are also shareholders.
Nobody gives Toyota $6k for no good reason, but TM is not in the same league as TSLA.
Toyota isn't even really offering something "new" though? People often throw money after random gimmicks to see what things are like as an early participant. I know people who had early VR headsets and I remember my father getting early portable computers (that weren't quite portable enough ;P). You learn what works and what doesn't and maybe it helps you in your job or maybe it just gives you something to talk about over dinner, but it can be fun to see a car almost (but not quite ;P) drive itself.
I considered working in the self driving field for a while. Being an early adopter puts you in a great position for that kind of thing.
While I didn't take advantage of it for self driving, or the iPhone (though I did buy the first one), I did for VR (backed the Oculus Rift Kickstarter and then made a career in VR) and deep learning (jumped in early and also made it part of my career) and Bitcoin (not part of my career, but still good).
I consider the cost of being an early adopter of things like FSD money well spent given the opportunities it can create for you. I think a lot of people are making money on YouTube simply recording themselves driving around with FSD Beta on.
> I think a lot of people are making money on YouTube simply recording themselves driving around with FSD Beta on
And the videos with the most hits are the ones where it almost gets the driver or nearby drivers or pedestrians killed.
I have a hard time putting FSD in the same category as VR or the iPhone, because those can't kill you. When you bought your first iPhone, I would bet you had high hopes that it would be a nice product and do what it said it would do, in contrast to your admitted low expectations when you paid for FSD.
So little? I'm testing the most advanced driving system available to the public. It does things no other available system even attempts to do. It's pretty cool to be able to try it first, and I don't consider that 'little'.
Also, FSD was $3k when I bought it, so that helps, and it came with a guaranteed computer upgrade too. I wouldn't pay the $15k it costs today.
I'd say you've been suckered into doing Tesla's testing work for them. Worse than that, you're even paying them for the very great privilege of being an unpaid intern.
It's perverse. They advertised at you and I think we have to say the manipulation has worked.
Because the parking code hasn't been touched in years. All their effort right now is on driving, and that part is improving rapidly, far ahead of other car manufacturers, obviously with a long way still to go.
Being an early adopter of technology always involves paying to test unfinished work. Even the first iPhone (which I also bought) was deficient in many ways. And yet being an early adopter has brought me personally a lot of benefits. The world is not zero sum, turns out.
> Funny how Tesla is able to pull this off. I can't think of another brand that so easily gets people to throw large sums of money at it while expecting so little in return.
The tech-bro -- basically us here -- crowd thinks that they're immune to marketing. But they get suckered in just as well as anyone, it's just a matter of playing to different preconceived notions.
> I believe that Tesla will one day be forced to give free camera and computer upgrades to everyone that purchased FSD (on top of the HW3 upgrade I already got), because the current in-car cameras and computers are not enough to implement what they promised.
This is called "specific performance" [1] and it is extremely rare for courts to compel fulfilling a contract when the standard approach of monetary damages will suffice. At the very worst case scenario for Tesla, they could be forced to refund those upgrades or even buy back the cars. One big way specific performance is used is if someone contracts to transfer real estate or infrastructure to be used as part of a bigger development project, but tries to pull out. Or if a subcontractor partially fulfills their duties as part of a bigger project, but is holding up key components.
Imagine someone has contracted with a tax accountant to help them respond to an audit. The contract says the client can fire the accountant and end the relationship at any time. The contract also says that if the client fires the accountant, they must pay $1000, and once payment is received, the accountant will transfer all records to the client within 24 hours.
The client fires the accountant and writes a check for $1000. The accountant refuses to take the check and refuses to transfer the files. A court might order specific performance: the accountant must fulfill the terms of the contract. This is the kind of issue where it is easy for courts to mandate specific performance, since it is pretty straightforward to determine whether the files have been transferred or not. The client certainly faces monetary damages if they don't get the files to give to their new accountant, but monetary damages aren't really a good way to make the wrong right.
But assuming a court sides against Tesla, what is the most commonsense remedy to make the wrong right? Refund the sales of a product that was not as advertised, or force the company to make the product into what was advertised? Courts don't have the expertise to judge what is technically possible in future technology development.
I'd accept a refund or buyback too, no problem. But I don't think that a court ordering specific performance is the only way that early FSD buyers might get free or discounted upgrades. That's not really what I was expecting.
If Tesla was ever ordered to refund every FSD package ever sold, how much money would that take? What sort of hit would the stock price take as investors priced out the "solving FSD will be worth a bajillion dollars" premium? I'm not sure Tesla could refund all its FSD sales.
I do believe it will work. Eventually. After hardware upgrades. And it's the only system available to the public that even attempts to do some of these things, so it's the only option if you want to test early. OpenPilot is cool too but it doesn't do everything.
> I do believe it will work. Eventually. After hardware upgrades.
Ok but that's not what Elon ever sold, even once, even a little bit.
He's promised full self-driving is one year away for each of the last nine years. [1]
So I'm happy you paid a huge sum of money to beta test a product on a completely unrealistic timeline, but that's not going to suffice for most customers. I have to imagine that many, if not most, people took the company at their word.
Yes, Elon has been and continues to be inexcusably optimistic on AI timelines. I think it would be reasonable to fine Tesla for false promises on timelines and on stuff like "all cars contain all necessary hardware for FSD". But being wrong about those things doesn't warrant criminal charges. I was completely aware that the timelines were wrong at the time, but I also believed (correctly) that FSD would be available in a testable form eventually and had no problem spending 5% of the purchase price of my car to participate in the future testing.
It's only lying if he doesn't believe it. And he demonstrably does. He has a whole division at Tesla designing a humanoid robot for mass production in the next few years on the assumption that the AI will be ready by then. He keeps removing sensors from their cars on the assumption that AI can replace them in a few months' time. He started a brain implant company with basically no near term market potential because he believes it will be important for interfacing with the upcoming AIs. If he didn't believe his own statements about AI he wouldn't be doing those things. It may be borderline delusional, but that's not the same thing as lying.
Consider that in the past plenty of other people in the industry have underestimated things that he's overestimated, such as the potential for reusable rockets, and the timeline for electric cars becoming viable, and the timeline for low-cost phased array antennas to support Starlink. And those have been incredible success stories for him, despite his initial overestimation, which really was overestimation, that many called stupid at the time, with some justification. And AI may yet turn out that way too. Even if he's wildly optimistic about timelines, he may still end up closer to correct than the conservative establishment in the industry which has a very strong status quo bias, like with disposable rockets and internal combustion engines.
Even if both Elon and Holmes believed they were doing the right thing, that doesn't mean they weren't negligent (broadly, that they should have known better) or that they didn't cause material harm to those who relied on their representations. This creates both civil and criminal liabilities. I guess what I'm trying to say is that I'm not sure sincere belief matters here.
I said it means something, not that it absolves them of any responsibility whatsoever. I already said it would make sense to fine Tesla. But whether they were mistaken or malicious still matters a lot.
Autopilot is basically worthless to me. I use it for a couple minutes at a time at most. The first time it slammed on the brakes on an empty highway, taking me from 70 to 40, I completely lost faith.
> I believe that Tesla will one day be forced to give free camera and computer upgrades to everyone that purchased FSD (on top of the HW3 upgrade I already got), because the current in-car cameras and computers are not enough to implement what they promised.
I agree the hardware won't be enough, but I find it more likely that the software will actually become available after your current car has passed its normal lifetime, and you'll have gotten rid of it already.
Well, I kept my last car 13 years, and so far this one has actually been more reliable than the last. I paid to get early access, which I already received, so I consider it worth something already. Based on the pace of improvement, it's a pretty safe bet that it will be at least fairly useful before I'm done with the car a decade from now.
I agree, lane changes are a really nice feature for me that made it worth it, although depending on how price sensitive you are it might not be. I think people forget or don’t realize that Tesla is purposely pricing this stuff for the kinds of people who think $6,000 is pocket change. If that’s not you, don’t buy it.
You're still responsible if your lane change causes an accident.
I don't see how it's worth $1. Do you trust the system to properly gauge the speed of a vehicle accelerating to pass you? If not then you need hands and eyes at the ready.
IMO the boundaries of cruise control’s abilities are extremely clear, so it doesn’t require a ton of overhead to know whether you’re pushing up against them.
I hired an annoying little Renault a couple of years ago. It would quite happily park either parallel or end-on, did a fairly reasonable job of it too. I didn't use it much, because it's almost as easy to park a car that size manually as it is to hit a couple of buttons :P. I think the difference between that and Tesla is that the Renault still wanted me to put the (manual) gearbox into reverse and use the accelerator to move the car, it was only controlling the steering wheel.
In my experience (Model 3 and Y), Basic Autopilot works very well on highways and is easier to use than the competition. However, enhanced autopilot and full self-driving do not provide benefits and are still unsafe outside freeways. They are used by Tesla to increase margins and hype their brand as innovative.
(One of) the things that pisses me off about this is that all through Tesla's marketing and website, the consistent undertone is that these features are complete and working, and that much of EAP and FSD is "just awaiting regulatory approval", when that's far from the case.
Of course, it's very convenient for Tesla, too, if it never happens (and I think we are a decade or so at least from anything), they will just point blame away from themselves, ignoring that what they've offered would never reach the barrier for regulatory approval. And there will be the usual cries about the short sellers, and "big oil" pressure, rather than placing blame where it lies, Elon's cowboy-esque attitude to the whole thing.
I think one of the questions being asked here is whether there is something a bit more sinister than just being a cowboy at play, and whether it is a ruse to make money. I was literally asking myself that question this morning before I heard of this case. I had no answer (mainly because there is not enough evidence against the hypothesis). I am interested to see what, if anything, they manage to unearth.
Even ignoring any possibility of raising more capital, the amount of revenue recognized from FSD is much smaller than the financial hit that would have been required to bankrupt the company, at every point in time.
You're right, that never really appeared in the annual reports. So either Musk lied to CNBC, or the annual reports played down Tesla's actual position.
The car itself is wonderful. Driving a Tesla is a level beyond anything else I've owned.
Autopilot on freeways works great. I disagree that it's deceptive. The name is confusing to the average person (it sounds like it drives itself without help). But there are constant reminders that you need to pay attention and be ready to take over. Stupid people will be stupid. A name change wouldn't be the worst outcome, but it's a great feature.
Navigate on autopilot is not great. It's kind of useful, but makes dumb lane changes and is too timid in traffic to work. Being able to switch lanes with a turn signal is useful and I use it.
Auto park is garbage and only works on very easy parking spots. Half the time it doesn't even "detect" the spot and won't offer the option. It's also not that good at parking.
Summon (classic) is useful for me. I use it to move my car out of the way to take out the trash. It's cool and not dangerous, though not super useful.
Smart summon is dangerous and not useable. As a previous commenter said, it has serious "nervous raccoon energy" and either gets confused or nearly hits things.
Full self driving is not even close to ready. It's nowhere near what was promised (get in your car and it drives you from A to B). It's unclear to me that this is even possible with the vision hardware in the car. I like playing with it, but I've never had a successful drive without having to take over. This was massively overhyped, perpetually pushed out, and honestly I agree with the fraud case here.
I'm mostly annoyed that all the broken features have sat stagnant while the company works on FSD releases that incrementally improve something that's fundamentally not even close to working right.
I just want to add a hearty “+1” to the everything rzimmerman said here!
I do not recommend people pay for summon/autopark/fsd/navigate. The basic autopilot is pretty good for highway driving and great that Tesla includes it free though. The simple summon is fun and works most of the time but not worth paying for.
In this case I mean that choosing “autopilot” as a name was probably not meant to trick anyone. Just an overestimate of the layperson’s understanding of the aircraft equivalent.
There was never a good reason to use AI for self-parking. Computer-vision based self-parking systems are perfectly functional and robust (see Mercedes from 2014).
Interesting - we have it on our first car, love it, and just paid for it a second time on our second car. We regularly make 3-5hr trips with it driving. Maybe something is wrong with yours?
TristanB9 of SF by chance - that’d be a blast from the past! At any rate - Hmm. Know loads of people with Teslas and never heard of anyone going over 10 minutes on AP unless it’s just down a single lane of interstate or something. What kind of 5 hour trips are you doing?
Are you letting it go, at length, doing anything more than a pilot assist does? I have gone over an hour on pilot assist, but that hardly qualifies as self driving and doesn’t free me from driving duties - which is what AP is supposed to do.
Not sure what you mean, EAP is highway-only and using it off the highway is off-label and not safe. I have tried FSD beta and it was not great at the time (10.12.2)
The longest stretch I’ve used EAP for is >3 hrs going from MA to NY. Highways were totally smooth, did its own lane changes, etc.
It can be slightly jerky at times when the traffic keeps speeding up and slowing down sharply, that’s my only real complaint.
Same here. Model 3, purchased 2018, FSD beta since December 2021. FSD isn't perfect, but it works. It's about as skilled as an average student driver, which means I am hands-off but still have to be vigilant for mistakes.
Pucker factor influences any current-generation self-driving. I've given demos to friends in the passenger seat who have had such strong negative reactions that I've had to end the demo -- as in they want to reach over to grab the steering wheel to save our lives in mid-turn. This reaction isn't totally surprising; I've been in human-driven cars with people who simply cannot be a passenger because they're so jumpy.
It takes a leap of faith to let the car drive itself, but with that faith and the requisite attention, the experience is good.
>FSD isn't perfect, but it works. It's about as skilled as an average student driver
No, it isn't. Do you think "student drivers" go around banging into things? Comparing the two is ridiculous and dishonest; FSD regularly gets fooled by things no human would ever be confused by.
I took a shower since the first version of this comment, so the following edit is an actual shower thought about your comment.
If self-driving vehicles succeed, it will be because they're safer than human drivers. That's obvious. But we probably won't be able to cherry-pick an arbitrary driving metric and show that computers always beat humans. Computer drivers will be safer than humans, but they will make different kinds of mistakes. Instead of 40,000 humans dying in auto accidents each year in the US because of inattention, DUI, sleepiness, overreaction, or poor training, we'll have maybe 1,000 humans dying because of ridiculous computer behaviors. And that 1,000 probably won't be a subset of the 40,000; they'll be 1,000 completely different people.
That will seem weird. The 40,000 deaths feel like a natural part of life, but the 1,000 will feel like a grave injustice.
Yet my computer thinks 1 divided by 10 is 0.10000000000000001, and I still use it for arithmetic.
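That digit string is genuine floating-point behavior, not a glitch; here's a quick Python check (printing 1/10 to 17 decimal places, which is enough digits to expose the binary approximation):

    # 1/10 has no exact binary (IEEE-754) representation, so asking for
    # 17 decimal places reveals the stored approximation.
    print(f"{1 / 10:.17f}")  # 0.10000000000000001
    print(0.1 + 0.2)         # 0.30000000000000004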
Curious, what did you expect here? As far as I can tell, enhanced ap is identical to the standard "basic ap" on all cars, the only additions are:
* automated lane change when you press indicator
* Summon (a Google search tells you all you need to know about how bad it is)
* Autopark (this has never worked on my model 3 either, aside from a handful of times. Again, common knowledge it doesn't work).
* Automated freeway exit selection
Otherwise, it's the same pretty good lane-keep system they give you with the car as standard.
Of course, Tesla being Tesla, I'm sure the feature set will change with time, so maybe you get more later? Tesla don't promise this though, and like all tech you should evaluate a purchase on its features today, not its promises tomorrow. I might have missed something, but generally that's all it adds over the basic AP you got with the car, and it's quite clearly documented as such. I can certainly agree I'd feel sick too if I'd paid $6k for it in its current advertised form!
I thought the "navigate on autopilot" feature was basically FSD but on highway only. But no, it just suggests lane changes that are dumb most of the time. Yes, I can only blame myself really, but I can't believe they would even sell such a worthless package for $6,000 more.
Yea, Autopilot is just what every other carmaker calls adaptive cruise control. Such an awful term. Full Self Driving is the in-city driving: negotiating stoplights, turns, etc.
I think this is only confusing because most people's only understanding of autopilot is what they watched in movies, when the pilot flicks a button and leaves the cockpit to go fight a villain, so they imagine autopilot means FSD, which it isn't at all in its original use in planes either.
I had the option of paying $3,000 for FSD back in 2018, declined, and am still convinced that was the right decision. Have no real complaints about EAP, apart from its phantom braking issues, which seem to have improved with time. I suspect part of how well EAP works may be how well the training data match your locale.
I have been on the FSD beta since Nov of last year; it's an absolute game changer one minute, and a total teenage driver the next. But every update makes it better: new features, tweaked turning radius, the microsecond it takes to move forward - every bit of detail is tweaked.
I use it all the time - it takes me to the dentist, drives me home - every single drive. It actively monitors traffic lights when stopped, and when it turns green it chimes; blinking lights, yields, left-turn yields, pedestrians, bicycles on the road, dogs and blind spots - all of it gets managed by the car with just 5-10% of my attention. It doesn't let me pick up my phone (I get a warning) or fiddle with Spotify. I need to keep my hands on the wheel and be active, but I am not anxious about blind-spot checks or bicycles in the dark.
When it is released to the general public, it will be groundbreaking.
The (Mobileye) Autopilot (version 1) is actually useful and good within its limits. It parallel parks. It summons (in a straight line, like out of the garage). It will maintain speed and distance with traffic-aware cruise control. Add Autosteer and it keeps its lane well (Autosteer does hunt a bit when roads get super-curvy). But it generally helps drive the car.
Is autopark that bad on the model 3? To parallel park I just pull past an open spot (to let the sonar sensors see the gap) and when the P symbol shows up, put it in reverse and press [Start] on the display. It usually parks better than I could especially with tight spots.
Self-driving is currently in the 'trough of disillusionment', much like 3D printing was about 3 years ago. 3D printing currently doesn't make any headlines, but the industry quietly adopted it.
I think self-driving is going to go through similar changes.
I agree. Self driving, VR, and 3D printing all feel like they are in the same boat together. It's tech that, if it were a little better, could get full adoption, but the current solutions aren't enough. Self-driving lacks because what's the point if you have to self-monitor at the same time; VR because the resolution and frame rate are barely better than a desktop computer, so the immersion isn't there; and 3D printing because it's extremely slow, struggles with certain designs, and needs sanding after - all are missing the primary problems they mean to address and are only half-fixes. Why do I want to 3D print something when it's going to take me 2 hours to sand down the ugly edges? Why do I want full self driving when I have to stay just as alert? Why do I want to immerse myself in a virtual reality that feels so fake?
I don’t think it’s a scam. It’s just Elon getting ahead of himself. And reality sometimes just needs a little time to catch up. Neither Tesla nor SpaceX would exist if his mind didn’t work like that.
If Summon doesn’t work, try turning on Bluetooth on your phone.
I'm surprised; maybe I was blinded by pro-Tesla channels, but some recent videos of users trying the latest "autopilot" update showed very impressive results (crossing and merging into low-visibility, high-traffic roads).
Okay, what about $10,000? That's what I paid. I took delivery in August 2020 when he was still making the empty promises. Biggest investment failure I've ever had to write off. I guess if it isn't a Bernie Madoff level fraud, it's all good though. Maybe not prison, but I believe Elon Musk's citizenship ought to be revoked at the very least.
Apple Lisa was $10,000 when it came out. But it, you know, worked. Came with word processor, spreadsheet, and drawing programs included. A 5MB disk drive, even.
This is what new technology has always looked like.
Going from the discrete alphabet soup of ABS, ACC, SC, SLAM to a continuous mechanism is brand new, and not some repackaging treadmill for management to convince investors they're busy.
I think that’s what makes it so funny. It’s not a humblebrag at all. The guy is being so earnest that the silly feature he spent $6,000 extra on top of his fancy car is one of his ”life’s biggest regrets.”
I always thought the Autopilot name and the claims about having all the equipment for FSD were basically fraudulent and Tesla would be liable for false advertisement. Never made the connection of Fraud => Someone driving negligent as a result => Negligent Homicide/Manslaughter but it seems solid and somewhat obvious once stated.
I've generally always considered level 2 or 3 autonomy to be a recipe for disaster; humans can't be expected to remain alert and respond quickly when they aren't doing anything. But at the same time, the way Tesla Autopilot works is analogous to autopilots on boats or airplanes. Those won't avoid other vehicles and are usually perfectly happy to let you crash into shoals or mountains.
One significant difference between Autopilot in Teslas and autopilots in airplanes is that in an airplane you typically have to prove that you can use the autopilot correctly in order to be allowed to fly that plane.
Another difference is that piloting a boat or a plane is usually much less dependent on quick reactions than when you are driving a car. The skies and seas are much more open, with much less traffic than on the road. On a well trimmed plane, I could probably sleep for 10 minutes and there is a good chance nothing will happen. Obviously, it is excessively dangerous, but I am not almost guaranteed to crash as I would be on a car. Flying or sailing is more about precision and planning than it is about reacting.
Having crossed oceans on autopilot, I’ve had three very, very near misses - so even in an environment which is virtually an uninhabited desert in comparison to a road, the risk in not maintaining appropriate and proactive watch is real.
Plus, if something bad does happen in an airplane, the flight crew typically has on the order of minutes to solve or mitigate the problem, vs. mere seconds in a car.
This is a big difference I think. Humans will get distracted but that's not such a problem if you have a reasonable amount of time to get back into control of the situation, but that's not possible in a car.
We have a small and cheap Japanese car that features adaptive cruise control and lane assist that works pretty alright. So it’s essentially a line-following robot that can adjust its speed to keep a safe following distance and do hard braking if needed.
It feels almost like full self driving when you drive on the highway.
It makes the ride significantly less tiresome but I would say it definitely reduces my attention.
I can totally see how people with more capable systems may treat the car as intelligent enough to drive 100% by itself and become negligent.
Mine has radar cruise control and "lane assist", which gives you a nudge back if the car drifts, but won't try to follow the lane by itself. Instead, it just beeps at you angrily and stops you killing yourself.
I think that's a good balance, because it's giving you alerts but you're still the one driving.
Yes, I have the same thing and it's great. The radar cruise control allows me to stop being focused on maintaining speed and following the car ahead of me, and lets me put more attention towards noticing traffic developments ahead, scanning the state of cars around me, and keeping an eye out for various hazards. I wouldn't want lane centering (as opposed to lane keep assist, which I have), because it would be too easy to shut off and not pay attention to what's going on, and the window between realizing the lane centering is doing something wrong and needing to take over from it is too small for comfort given human reaction times.
Turns out, at least in my own personal experience, that’s not how it works. I don’t think this has been well studied, especially in the context of driving in a vehicle with gaze / attention tracking features.
While my Tesla is working to keep the car perfectly centered in the highway lane, it frees me to focus on higher-level situational awareness during the drive.
In particular, I find I can more quickly and consistently identify and avoid other distracted drivers, because I can watch them more closely without drifting a few inches out of the middle of the lane.
Just because you are not steering does not mean you have to disengage from driving. The focus on steering as a proxy for engagement is over-simplistic. Manual steering just to lane center can lead to “blind stare” which causes many accidents, so it’s a lossy proxy for attention / focus.
As a new technology is adopted and people become less scared of the change, it eventually comes to a point where a new generation laughs at how things used to be done.
What HN consistently likes to frame as a fundamental human limitation (“driving without actively steering can never be safe”) is in fact just a faulty assumption and purely conjecture in the context of a modern car cockpit with advanced lane keeping, adaptive cruise, object detection, and driver gaze & blink tracking.
See, this is the problem. Where do you draw the line? I can see somebody coming along and saying that adaptive cruise control and lane-departure assist, which applies slight torque when the car sees you drift out of your lane, is very helpful to them.
Next up, adaptive cruise control and lane centering which applies torque most of the time on straight roads but can't do curves. This is now available in most if not all car systems today, in the most basic version without any additional packages.
Then come some smarter ones like Tesla AP or comma.ai, which are pretty good at centering on straight roads, curved roads, wide lanes, lane splits, etc.
Are there cars shipping now with lane centering that can only handle perfectly straight roads? In my experience most manufacturers currently offer at least one model & trim that has lane centering & adaptive cruise that is functionally equivalent to basic AP. In some cases, e.g. SuperCruise, it's significantly better.
Yes, there are some, e.g. the Acura MDX. It uses the Mobileye chipset and can barely make a highway turn. The car is good otherwise, but it in no way compares to the Tesla ADAS (I have daily-driven both). The Tesla has done several trips to LA and back with >1 hr between touches on California I-5. My commute on Bay Area 280 feels far more natural on Autopilot when cars cut me off, etc. I think performance depends significantly on what roads people drive.
I have a 2021 Toyota RAV4 with adaptive cruise control and lane assist; it worked OK on highways, especially not having had it before. You have to keep nudging the wheel every 30 seconds or so, which made it annoying.
I upgraded to a Comma 3 and it's basically night and day. The car never hunts from side to side, and it uses eye tracking so I don't have to touch the wheel. Again, if you have to touch the wheel on an ACC system, just drive the car haha.
So you run Comma AI on a RAV4? I assume the Toyota has to have the requisite packages already optioned from the factory so the hardware is there? I'm just curious, because I'm leaning towards picking up a RAV4 Prime for my wife to use as a daily driver. I'm not 100% sure I care too much about the technology as a practical matter (I had a Tesla, I usually just drove it myself; but I like driving). But it might be fun to dork around with and see what they've built.
Yea Rav4 Hybrid 2021. afaik it doesn't work on the Prime, although you can check the car chart on the comma.ai site to be sure. I like playing with the forks every now and then too.
> Again, if you have to touch the wheel on an ACC system, just drive the car haha.
I disagree with this. I used to get RSI-type pain from long drives. The constant, small tweaks really took a toll on my hands/wrists/forearms.
When my car's lane centering isn't perfect, it still takes 90% of the force off. It's even more noticeable in windy conditions. Lane centering fights all of the gusts automatically.
Fair enough. I just find the steering wheel nag really really annoying. The future is eye tracking or a combination of eye and wheel nag (maybe every 5 minutes or something)
Where I live and with the kind of driving I do it actually feels safer because I get less tired on longer drives. This is long straight semi rural driving though. I don't use it on tight roads or in traffic. Be interesting to see the statistics on safety of lane assist and adaptive cruise control.
Taking human nature into account has driven so much progress in terms of road safety.
Cue corporation selling you "Full Self Driving" for 5 figures, allowing your vehicle to become autonomous. Except you have to be alert and behave as if it wasn't. At all times. Of course.
The usual FSD complaint is that Tesla has been offering it for years, at ascending prices, without any date-certain of delivering it (just tweets predicting it), without refunding it when not delivered after X years. I can easily imagine an eventual class-action suit requiring some % of refund all the way back to the first car purchased with someday-FSD (!).
If they deliver it, then (at that time, and depending on how well it works), the complaints of not-good-enough or dangerous-killing-people could come into play.
The second thing sounds even more dangerous to Tesla - smart to keep delaying (and building up that first risk higher and higher!)
And just one of the disingenuous responses from Tesla is that you could have it all now, or could have, if it wasn't for pesky regulators.
There might be an awkward silence when you ask why as recently as six months ago, it was still happily hopping the curb of roundabouts, or why even to this very day, there's no AEB capability in FSD (if your foot is on the accelerator, the car will consider that a higher priority input than an imminent collision).
I'm not sure if you're being serious, but the way "autopilot" works in a private yacht is the captain turns it on and goes below deck and gets drunk with the rest of the crew. It just maintains a heading towards a destination at that point.
You learn very quickly on the water to get out of the way of large private yachts when in open waters. There is literally no one at the helm. Even if they did notice you, they aren't in a position to halt the boat's travel.
Autopilot in planes is not remotely as you described. It is used to reduce cognitive load precisely so pilots can pay more attention to higher cognitive demand tasks than maintaining straight and level flight. Such as collision avoidance, radio communication, navigation, briefing, etc.
Your point only alleges that amateur yacht drivers act irresponsibly, not that naval autopilot systems are inherently unsafe.
If by "radio comms", you mean playing a pre-recorded emergency message on a loop telling everyone to get out of the way.
It can't visually identify non-ADS-B traffic (and I'm not even sure yet if it will avoid ADS-B equipped traffic?), it can't comply with ATC clearances, it can't coordinate with other pilots in the pattern, and it will happily fly your aircraft right into potentially-fatal icing conditions. It certainly can't be used during routine flight.
Garmin Autoland is a wonderful piece of engineering, but even at best it's not a replacement for what a pilot would normally be doing to safely navigate. It's strictly there as a last-resort measure if the pilot is incapacitated.
Drivers get so much leeway that it's not even funny. Commercial air pilots get their licenses temporarily suspended if they're simply accused of being intoxicated.
Was the biker the police chief's son-in-law, or some rot like that? Did the driver have a few down at the local, but didn't quite blow a 0.08?
Most hit-and-runs don't even result in jail time[1]. But sure, I can believe that hitting and killing the wrong person may once in a blue moon result in the law coming down on you like a ton of bricks.
I read the whole article twice but didn't see anything saying that "Most hit-and-runs don't even result in jail time".
It basically says that statistics on conviction are unknown, and sometimes people don't see jail time.
And no, the driver wasn't drinking. They were 17 at the time and went to prison instead of college. I remember it because they had a soccer scholarship. I was scared because it could easily have been me going to prison.
The extra dimensionality of roads < seas < skies help a lot. Though I suppose various standard corridors and channels collapse those dimensions somewhat.
Just want to chime in and say that I completely agree. From a technology perspective it’s interesting, from a human factors perspective it’s a complete disaster.
Automating the easiest parts of driving and just expecting a human to jump in and figure out the tricky stuff in an instant is just such a horrible design that it’s hard to see how it’s gone this far.
> I always thought the Autopilot name and the claims about having all the equipment for FSD were basically fraudulent and Tesla would be liable for false advertisement
Same. I thought they'd get slapped hard and forced to stop doing that YEARS AGO.
Instead they doubled down and started selling "full self driving"
The whole Musk saga is basically a cautionary tale for regulators, and a classic American story of regulatory capture.
The second something that's not autopilot was marketed as autopilot should have been an instant barrage from the FTC.
The second Musk committed securities fraud for "funding secured" he should have been banned from ever being an officer or director in a publicly listed company.
The second it became clear that Tesla was ignoring the requirement to have a Twitter Nanny on Musk's Twitter, the SEC should have obliterated the Tesla board.
But this is the post-Obama US, where insider trading is fine as long as it's the Right People doing it. Where bankers and investors can take obscene and reckless risks knowing the taxpayer will cover their losses. Where a President's son can openly try to sell influence - who knows with what degree of success - and the intelligence agencies won't lift a finger. So the Musk saga makes perfect sense.
And thank god for that. Because we have, as a result, an EV revolution, a space revolution, and a communications revolution. We can only be glad that the regulators were asked to back off what is openly a good thing for America.
This seems dishonest, as the things you mention happened before, or were in motion before, Elon went full-on nutter over the past 2 years, and you make it sound like Elon acting like a nutter is somehow a prerequisite for these things, when they are clearly not.
I think Elon Musk is a great person to have at the helm when rapidly growing a company. He’s a visionary and he’s clearly good at taking calculated risks despite what I think about him personally. What’s not clear is whether or not he’s any good at running a large and mature business where you need to be somewhat risk averse to protect revenue streams.
I personally think Tesla would be a better company today without Elon Musk. The reality is they’re not executing on the one thing that makes them money. They’re nearly 2 years late on their pickup truck which is a license to print money in the United States. We’re JUST seeing the first deliveries of the semi but who the fuck knows when production scales. The kind of risks he’s taking (removing radar from cars, losing billions of dollars while putting up Tesla stock as collateral to buy twitter, cryptocurrency nonsense) is just not the type of thing you do when you’re the size Tesla is today.
Tesla is just insanely distracted right now and shareholders are eventually going to be punished. If I only knew when, because I believe there’s money to be made.
If what you took away was that it’s fine to commit securities fraud if you also make rockets then I don’t even know what to say. A person is allowed to do good and bad things.
Musk was not facing overregulation; requiring him not to commit fraud and mislead consumers is not overregulation.
This comment is highly misleading; the DoJ probe centers on fraud and "misleading consumers, investors and regulators". It has nothing whatsoever to do with manslaughter, and Reuters never even hinted at this.
That's what I thought too, but interestingly, autopilot is mere cruise control for airplanes. It can't take off or land the plane, it doesn't change course. It's definitely not "self-flying". But somehow, the term "autopilot" for cars made me think that the car would drive itself. That's an interesting twist of perception.
I'm pretty sure autopilot in planes can land them. They don't do this because they need pilots to be trained in how to land them in the event of autopilot failure. I know for a fact there are non-commercial planes, for example, that will detect pilot incapacitation, find the nearest airport, account for local weather, talk to ATC, and land the aircraft.
Autoland works. But the pilot doesn't turn it on until the tower (or other communication if there is no tower) says okay, and since nobody else is in the way, of course it won't hit anyone.
There is a second mode in some. There it broadcasts an emergency: uncontrolled plane will land, get out of my way as I won't get out of yours. Useful if the pilot is dead (so a passenger who is told to hit that button in an emergency can safely land), but it cannot be used for anything else.
Mind linking to evidence regarding the ‘second mode’ you referred to? I have not heard of anything like that, and I am a reporter who covers transportation safety.
That's how it used to be. In the past, "autopilot" was used by laymen to describe a vehicle control system that could safely drive itself (in contrast to cruise control, which was more limited). For example, in the punchline to this chain letter from 1993:
> [A man] was driving along in his motor home. He turned on his ‘cruise control’. Apparently misunderstanding the function of ‘cruise control’, he then went into the back of the motor home. The motor home drove off the road and crashed. Apparently he did not realize that ‘cruise control’ is not ‘autopilot.’
This story is apocryphal, but it illustrates that the popular understanding of 'autopilot' for road vehicles was full self-driving.
Autopilot in planes does change course. In GA it's called GPSS steering and a lot of planes have it. I believe jet planes have more sophisticated autopilot that can change when ATC gives a new heading.
No, because "adaptive cruise control" is a different thing that all Teslas have already. It simply refers to cruise control that will slow down if the car in front of you does. All Teslas have that for free, and it's been an industry standard term for at least 20 years. Even cheap cars have adaptive cruise control these days.
The self-driving feature also steers the car, stops at signals and stop signs, merges onto the freeway, etc. Those are all things that adaptive cruise control does not do.
Here's the manual page for the feature, described as 'building on' the ACC. You have to opt in to a beta feature, so 'standard' is not strictly correct. But my Model 3 follows center lane without Autopilot.
I met someone recently who told me they SLEPT on their way to work in their Tesla. On purpose. They bought the car specifically because they had a very long 2.5 hour commute.
The marketing for this stuff is absolutely a liability, and as much as I like what Tesla has done for electric cars in general, we are one big, massive accident away from this all absolutely blowing up.
I’ll never understand anyone that has this much faith in technology. Too many blue screens in my formative years.
Nor do I understand anyone trusting their Tesla this much. Even the cruise control freaks out sometimes and slams on the brakes from phantom apparitions. I don’t even trust the base autosteer feature going down the highway.
Those blue screens occurred in dramatically less complex and better understood software, too.
I'm not saying we don't understand how FSD works, or the underlying technology. It's not a complete black box. I believe there was even a proof recently showing that these things are essentially massive decision trees and they can be deterministically unravelled.
But this is otherwise totally negligent from where I'm standing.
A neural network doesn't panic, so there is no blue screen, only bad decisions. I think that's even worse, as that would be a strong signal to take over driving.
This might be mitigated if the result includes a confidence percentage, i.e. panic if confidence falls below a threshold, but even then I'm not sure how reliable this metric is.
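A minimal sketch of that mitigation in Python (purely illustrative: the threshold, the toy logits, and the use of softmax as a confidence proxy are all assumptions, and as noted, raw softmax confidence is not a well-calibrated metric):

    import numpy as np

    def should_alert_driver(logits: np.ndarray, threshold: float = 0.9) -> bool:
        # Softmax over the network's raw outputs.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # "Panic" (hand control back) when the top prediction is unsure.
        return bool(probs.max() < threshold)

    # Three nearly-tied outputs -> low confidence -> alert the driver.
    print(should_alert_driver(np.array([2.0, 1.9, 1.8])))  # True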
Absolutely. I worry that the decision tree is so complex that when we do traverse it to understand errors, we could easily draw the wrong conclusions anyway.
When we need to audit something that is more complex than we can really comprehend, can we come to reliably accurate conclusions? If we need tools to do this for us, can we be certain they’re accurate as well?
All of it feels very Icarus-like. I’d be content if we tested this stuff out in isolation for another 20 years before throwing it on public streets.
So unsafe. I work with someone who’s very pro-Tesla, and she let the AP beta drive us around the Seattle area a few times when I visited a while ago. We got into 2 near-accidents. First, the car ran over a cone on the freeway going 65mph+. Second, it almost turned into oncoming traffic out of nowhere while driving in a residential/commercial area.
> I met someone recently who told me they SLEPT on their way to work in their Tesla.
He'd have to work around and bypass several safety features that prevent you from doing this. You cannot do this out of the box. You have to work hard to get it to do it.
I take any claims of people doing this with a large grain of salt.
Yeah, I was not thrilled when a friend gave me a ride in a Tesla and was nodding off as we sped down 101. I kept trying to ask him questions to keep him awake, with limited success.
I drove next to a guy on the highway who was sleeping in his Tesla. His head was cocked backwards with his mouth wide open, probably snoring. We were probably going 70 mph.
I don’t think it has to be a bad thing. It’d be an interesting statistical problem: what is worse, a person who drives sleepy for 5 hours each day, or a person who gives up control to a computer for 5 hours each day? Point being, we are attempting to compare FSD to perfect driving, when the comparison should be with current drivers. If it is an improvement, i.e. a reduction in crashes, why not use it? Maybe it’s because we aren’t sure who to blame in case something goes wrong? I know back-seat drivers who get uncomfortable as passengers even with a driver with zero accidents, and would rather drive themselves, yet have crashed several times at their own fault. They misjudge their own abilities, the situation, and the need for certain aggressive driving. I think many are misjudging the average driver’s ability without self driving, and we could be causing a lot of loss of life by not taking a statistically better option out of human hubris.
This is not a valid comparison, though. Rather, consider the extrapolation: "drive for 20 hours or let a computer drive for 20 hours?". (edit: the extrapolation is intended to highlight the absurdity of the premise.) The correct answer is to mitigate the risk with breaks or safer alternative transport such as a hired driver. It's not a closed system with only two possible solutions.
If the solution is not tenable, the plan should be aborted. Not rammed through via abuse of technology.
I didn’t say it was a closed system. I’m just saying one might be better than the other, and I don’t know how you determine that other than by comparing the possibilities. If everyone had a private driver then yea, that option is also better, but we live in a human world, not a perfect world.
You implied that the only solution to an unsafe decision was a similarly unsafe decision. My argument is that the real solution would involve some other compromise. If this specific scenario was a work commute, for instance:
- move closer to work
- secure lodging to amortize the commute over more days
- arrange a carpool
- work remotely to reduce the cost of any of the above options
> I met someone recently who told me they SLEPT on their way to work in their Tesla. On purpose. They bought the car specifically because they had a very long 2.5 hour commute.
Think about that. If someone can literally sleep while the car is driving itself, and it doesn't crash, then the self-driving feature actually works really well.
No. Humans average something like 250,000 miles per insurance-worthy event. Even if we assumed an average speed of 80 mph, which is absurdly high, that is ~3,000 hours between accidents. That system would need to make the interaction-less round-trip commute 600 times, 2.5 work-years, without even a single minor accident for the unassisted system to even be comparable to the average driver.
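Spelling that arithmetic out as quick Python (the inputs are the rough assumptions above, not measured data):

    miles_per_claim = 250_000  # miles per insurance-worthy event (assumed)
    avg_speed_mph = 80         # deliberately high average speed
    commute_hours = 5          # 2.5 hours each way

    hours_between_claims = miles_per_claim / avg_speed_mph  # ~3,125 hours
    round_trips = hours_between_claims / commute_hours      # ~625 commutes
    print(hours_between_claims, round_trips)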
Because it undermines the argument made in the "criminal probe". Tesla is doing self-driving as well as it can possibly be done with today's technology. If it could be done better, it would be, by one of their competitors, but they can't beat it.
It is in fact so good that people can sleep in their cars and not crash -- and I wouldn't bet that the guy who sleeps in the car only did it once!
If you are going to compare it to human drivers, it also matters which humans you compare it to. Every time I drive I see people doing things far stupider than what a Tesla would do. Driving skill, and judgment, is so unevenly distributed in the human population that it would not be difficult to find a large number of humans who underperform the Tesla system.
It would be weird to punish Tesla for not being perfect. It has delivered self driving features that are already better than a lot of the humans on the roads today, and better than all of its competitors.
Let's go a bit deeper into your argument:
> Because people buy this feature because they don't want to drive (as a human); what does it matter how it compares to other self driving systems?
Ok, pretend you are someone who is looking to buy a self-driving car because you don't want to do the driving. Wouldn't you want to buy one that works well? It's a matter of life and death, after all. So you would do some research to find out who had the best systems. If you did that you would find out that Tesla's system, despite all the criticism it gets, is one of the few on the market that is worth buying.
Any improvement _over_ 99.999% is impossible. Saying something like this is completely detached from reality. People _do_ buy cars that crash once a year. All the time. And often the defects in the system aren't even related to self driving functionality. Further, most drivers who get in a car have records far worse than that.
No, on basically all accounts. In the US there is a fatality every 75,000,000 miles, an injury every 1,250,000 miles, and a reported accident every 550,000 miles on average [1]. From a "reliability" perspective, we can reasonably assume that at any given time, you would probably crash in less than a minute without control over your car. The average driver probably averages ~40 MPH per unit time. So, on a minute basis that is one fatality per 112,500,000 minutes (99.999999%, 8 9s), an injury every 1,875,000 minutes (99.99995%, 6 9s), and an accident every 825,000 minutes (99.99988%, 5 9s).
Even the worst component of the driving system (the driver) is solidly over 99.999%. And every single hardware component is vastly better than the driver. The Pinto, a classic example of a death trap, only resulted in a hardware-induced fatality something like every 1,000,000,000 miles, or using the minute basis above a fatality every 1,500,000,000 minutes (99.9999999%, 9 9s). The person you responded to is correct, 99.999% is unusably, criminally bad.
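As a back-of-the-envelope check of those "nines", using the same assumed ~40 MPH average:

    avg_mph = 40
    for label, miles in [("fatality", 75_000_000),
                         ("injury", 1_250_000),
                         ("accident", 550_000)]:
        minutes = miles / avg_mph * 60
        print(f"{label}: one per {minutes:,.0f} minutes "
              f"({(1 - 1 / minutes) * 100:.6f}%)")
    # fatality: one per 112,500,000 minutes (99.999999%)
    # injury:   one per 1,875,000 minutes   (99.999947%)
    # accident: one per 825,000 minutes     (99.999879%)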
>Tesla is doing self-driving as well as it can possibly be done with today's technology. If it could be done better, it would be, by one of their competitors, but they can't beat it.
This comment is everything that's wrong with FSD and its marketing.
The bar isn't what self-driving cars can do in general (besides the fact you completely ignore that other companies are operating driverless taxis as we speak).
>It's a matter of life and death, after all.
Consider this: it's not just life or death for you. Do you understand that bystanders and other drivers don't want to be part of your experiment? The selfishness is unreal.
If you want FSD to improve then you need to allow experimentation in the real world. That means accepting risk. Our governments have wisely decided to do that and most people don’t seem to be worked up about it.
If you want to ban FSD, then go ahead and vote for a ban. But it’s not realistic to say that FSD is allowed only when it’s perfect — no technology reaches perfection without first passing through a phase of imperfection.
What I want from full self driving is something at least close to hiring a professional chauffeur. I’ve had to do this a few times in Europe. The driver pulls up in an immaculate Mercedes sedan, he drives precisely and carefully with an expert understanding of local traffic and road conditions. This is something I’ve also experienced in Tokyo in every taxi I’ve hailed on the street (except for the brand of vehicle).
Out of politeness I engage in chit-chat with the drivers, but I would be completely comfortable reading Hacker News in the backseat or even sleeping. I hope self-driving reaches this level of ability soon; I'm going to be too old to drive in 15 years.
To get a sense of what this requires, reflect carefully on what your brain is attending to while you drive. Does it ever require higher levels of reasoning than pattern matching? It does for me. I regularly encounter issues that would confuse current technology: bad fog, reckless drivers, icing on overpasses, drivers going the wrong way, a marathon blocking my route, boisterous junior high-school students walking along the curb, detours, temporary lanes inconsistent with actual markings, broken traffic lights, and damaged or vandalized traffic signs. I could sleep soundly through all of that with a professional driver at the wheel.
As someone who could potentially be killed by one of Elon's homicidemobiles, I would very much prefer if vehicles that are more deadly than those driven by humans would not drive in places where they could potentially kill me.
About 40,000 people die per year in the US in traffic accidents. There are about 200,000,000 licensed drivers in the US, so about 1 in 5,000 drivers dies per year. Tesla has released FSD Beta to ~160,000 drivers. To first order, then, ~32 Tesla drivers will die per year in traffic if they constitute a representative sample (they do not, but we are doing quick estimates here). So a system used as a full-time, unassisted replacement for those human drivers will kill, on average, one extra person per year for every 3% it is worse than a human driver, and that is with it deployed to a mere ~0.1% of the US driving population. Applied to the entire US driving population, a system 3% worse would result in ~1,200 extra deaths per year.
At the scales being discussed, extremely small differences applied over the entire domain result in very serious risks to human life. It is not okay to knowingly sacrifice people at the altar of moving fast and breaking things so that Elon Musk can make another couple billion dollars.
Also, it is not even close to being one of the better systems from a safety perspective. It is something like 400-800x worse than Waymo and 2,000-4,000x worse than Cruise [1]. It is so far behind so many other companies that it is fair to say Tesla is not even in the race, let alone a frontrunner.
They are not within 3% of human drivers or even close. They are 2,000x worse than Cruise, which is itself still at least 8x worse than human drivers. Being generous, that makes them 16,000x (1,600,000%) worse at unassisted, intervention-free driving than humans. To put that into perspective, if every car in the US had Tesla FSD Beta and they were all using it as a fully autonomous system without babysitting, it would average ~1,750,000 deaths per day, and everybody in the US would be dead in about 6 months. Within a week it would kill more people than have ever died in traffic accidents in the US. We are literally talking one deca-Hiroshima per day of badness. The only saving grace of this whole thing is that hardly anybody is insane enough to use it unattended more than a few times, so we do not see the sheer catastrophe that would occur if it were actually used as advertised.
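If you want to sanity-check those numbers, the whole estimate fits in a few lines of Python; every input is one of the rough assumptions above, not measured data:

    # Rough scale-up of the back-of-envelope estimates above.
    US_TRAFFIC_DEATHS_PER_YEAR = 40_000
    LICENSED_DRIVERS = 200_000_000
    FSD_BETA_DRIVERS = 160_000
    US_POPULATION = 330_000_000

    # Expected yearly deaths among Beta drivers if they were representative.
    baseline = US_TRAFFIC_DEATHS_PER_YEAR * FSD_BETA_DRIVERS / LICENSED_DRIVERS
    print(f"baseline deaths/year in the Beta cohort: {baseline:.0f}")  # ~32

    # A system 3% worse than humans, applied to all US driving.
    extra = US_TRAFFIC_DEATHS_PER_YEAR * 0.03
    print(f"extra deaths/year at 3% worse: {extra:.0f}")  # ~1,200

    # The 16,000x-worse figure, applied to all US driving.
    deaths_per_day = US_TRAFFIC_DEATHS_PER_YEAR * 16_000 / 365
    print(f"deaths/day at 16,000x worse: {deaths_per_day:,.0f}")  # ~1.75M
    print(f"days until the US is empty: {US_POPULATION / deaths_per_day:.0f}")  # ~188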
I'm not sure flat risk per distance is the right model to use. If we consider a model like n classes of issues that all need to be handled one after another, then after most changes the incident rate will still be far above human, until it suddenly becomes much more safe. In that model, the failures come from the software failing categorically, not probabilistically.
SuperCruise and FSD currently don't even work on any of the same roads. FSD is for non-freeway roads, and SuperCruise only works on (some) freeways. Autopilot is Tesla's system that works on freeways. (Tesla does plan to merge Autopilot and FSD eventually, and SuperCruise will expand to some non-freeway roads, but neither has happened yet).
SuperCruise is only rated higher than Autopilot when people use rating systems that dock Autopilot for having fewer driver monitoring features, i.e. nanny features that scold you for not paying enough attention. I haven't seen any evidence that the actual driving part of SuperCruise or any other competitor is better than Autopilot.
> The Justice Department’s Autopilot probe is far from recommending any action partly because it is competing with two other DOJ investigations involving Tesla, one of the sources said. Investigators still have much work to do and no decision on charges is imminent, this source said.
This is definitely a thing that happens. If you are ever doing crimes, just be sure to commit a whole bunch of them so the DOJ has to move more slowly. It's a too-many-cooks-in-the-kitchen kind of thing.
> It's a too-many-cooks-in-the-kitchen kind of thing.
It’s actually a 5th Amendment (double-jeopardy) thing; prosecuting a subset of crimes from a single course of conduct may preclude prosecuting others depending on their relationship, so it becomes important to fully investigate before prosecution.
In this case I suspect it’s also an ‘Elon Musk has too many lawyers and is acquiring a company that “buys ink by the barrel”’ problem too.
Everyone involved is going to be very careful to not end up personally targeted or liable, and that nothing done could be twisted to make themselves or the administration look like idiots.
It’s a big part of why ‘the rich don’t have consequences’, as is the current fashion to say - they can defend themselves against all but the most careful prosecutions, and can afford to hire people to make sure their asses are covered (if they think to do so).
Whether Elon actually covered his ass here is yet to be determined. I think he probably didn’t do so sufficiently, but he can muddy the waters enough to stay out of prison, and will just have to shell out some money in a decade or so for fines or civil suits.
Of course! We all know who 'runs' those companies, who is the prominent face of those companies, who has repeatedly made very public (and often dubiously factual) pronouncements about the exact technologies and products at the center of the investigation, who has all their net worth tied up in these companies, who has been an outspoken critic of various government agencies (and relatively politically active), and also happens to be very publicly buying a hot button social media platform right now.
Watching Trump do the same, I've taken to calling it the Montgomery Burns Defense (canonically known as Three Stooges Syndrome in the Simpsons episode, apparently)
And make sure the figures involved are not below 9 digits. That way, when you get your two year (6 months with good behavior) white collar slap on the wrist, your French Chateau will still be waiting, and you can laugh at the guy who's doing a decade for bouncing a $200 check on the way out.
Musk is an outspoken critic of the current administration. Given what we've seen from the Justice Department over the last few years, does it not seem highly likely that this is politically-motivated?
Musk has actually been treated with an incredibly light touch given the number of people who have been maimed and killed by Teslas running Autopilot. If this Administration was truly antagonistic to Musk, there are dozens of things they could be doing to create problems for him and his companies. They're not.
Or the current administration is trying to have regulators and prosecutors start to go after malfeasance and Musk is criticizing the administration because he’s got a lot that he can be prosecuted for.
Isn’t it fun how wild, unsubstantiated, paranoid speculation can cut both ways?
Not so long ago, Andrej Karpathy, the ex-Director of AI at Tesla left the company.
I ask myself a couple of questions: if Tesla were about to release some game-changing technology in the self-driving space and I was the face behind it, would I leave the company so another person could take the credit? Would I leave because I think they are going to push something I consider dangerous to the public and I don't want to be the scapegoat?
I, as a software engineer, will never trust an autonomous vehicle that is not on rails. Even then, it better be a slow-ass train so I can jump off with minor injuries if anything goes awry.
I might trust a fully open-sourced vehicle, produced in a manner in which the software and hardware design and implementation are fully verified as error-free against a simple-to-understand open-source model. That will never happen, though.
It's totally fine to have lines you won't cross, but as you say literally none of this is going to happen. Autonomous vehicles aren't on rails and error-free isn't a thing.
I'm also not sure how a fully open source stack would help to alleviate your safety concerns. Current vehicle stacks are impossible for a single individual to meaningfully review, however motivated. These are incredibly complicated distributed systems with dozens of processors running multiple operating systems and frameworks on top of those before you even get to application code. The safety requirements alone consume thousands of pages.
I also cannot know what’s in my computer’s Linux kernel but I feel better knowing everyone else can, and some of them can find bugs and fix them fairly easily. With closed source stuff, only the software developers at the company can see the code, which limits the eyeballs on it.
Currently the vehicle may fail but generally my brain won’t, except in certain cases. I’m fine with that, not with my car driving into a barrier it thought was a road. I probably won’t make that mistake.
I am not an engineer, but I feel the same way about payment systems. It is all so fragile when you peek behind the scenes that it is hard to justify using anything but the most basic services.
What kind of banks? Or do you mean bank as a generic name for a financial institution? Works as a function of payments? How it is intended to function vs how it actually works?
It seems like a very easy question, but it is admittedly hard for me to answer without some additional way to parse it.
If you think this is possible you must not be a very good software engineer. Maybe that was your point. If so, you should say "I accept faulty human drivers but will forever reject anything worse than absolute perfection in autonomous drivers".
I’m not a very good software developer, no, and that was my point. I’m a decent driver, but usually I farm that out to my wife, who is better. Based on my phone's swipe-keyboard performance, I’ll say I don’t trust software developers to create a car’s brain.
So, much of this sounds like a question of just how much you can lie in your marketing material if you then take it all back in the click-through.
I'm curious how many times people have been refunded for enhanced autopilot after claiming a bait and switch: they believed the marketing materials, purchased the car, and then discovered on delivery that it was "hey, not really".
But I'm guessing what will hang them is the question of whether people actually believed the warnings were real or just CYA, and then promptly treated it like a full "better than human" AI.
When I bought FSD it stated clearly it depended on technological and regulatory advances, so I’m not sure I get the argument that I was a victim of fraud.
Also, I haven’t seen it mentioned here, but the tech does exist and currently you have to demonstrate “safe” (according to their heuristics) driving to get access. I have access, and it more or less does the things they promised, far from perfect (in fact it makes lots of mistakes where I take over for safety), but I fail to see how they promised perfection. My interpretation of the “full” in full self driving was that it handles the full list of maneuvers they enumerated like obeying traffic control and making turns, which it does do today.
> The U.S. Department of Justice launched the previously undisclosed probe last year following more than a dozen crashes, some of them fatal, involving Tesla’s driver assistance system Autopilot, which was activated during the accidents, the people said.
I wonder how that compares to the number of accidents where BMW’s driver assistance was activated. BMW seems to be actively trying to kill me by steering into every construction marker it sees.
Honestly surprised any of this made it past the legal department at Tesla. Any company I've worked at has always been super careful and conservative with the language you use to describe anything.
"managed to randomly announce" is a very odd way of saying "the CEO committed securities fraud because bankruptcy was imminent and his entire net worth was sunk into the company".
Tesla did nothing of the sort. Elon got into trouble for influencing the Tesla share price. Which is almost comical to think about due to how common it is to do exactly that.
Elon made a mistake of giving a specific number. If he just made any run of the mill bullshit grandiose claims it would have been fine. I really have no idea why people are so caught up on this. There is massive securities fraud actually happening every day, and people are hung up on some Twitter banter from half a decade ago.
FWIW, I have no idea, I just know that Tesla's inability to keep a legally-sound lid on Elon Musk has been a recurring topic for Matt Levine for years. He got in trouble with the SEC for the take-private stuff.
Their legal team has reportedly had some issues of late: “At least a dozen company lawyers have left their posts this year, according to three people familiar with Tesla’s operations. They include some attorneys who worked in business units outside of legal, including human resources and tax, the people said. The company has no lawyer or legal chief listed among its three top executives and has been without a permanent general counsel since December 2019.”
https://news.bloomberglaw.com/business-and-practice/tesla-la...
Because the legal departments don't have veto power on anything. In-house counsel is just that - counsel, but in-house instead of some outside firm. Managers are free to ignore anything their lawyers say, just as you can ignore anything your lawyers say.
The lawyers will cover themselves by writing memos about how they "informed the client to not do thing X, but they indicated that they don't care," but because those are work product, they're confidential. They're only for future malpractice suits, should they come.
I'd phrase that as legal departments don't have veto power on anything unless management gives it to them.
If I -- Minion #64752 in BigCorp -- run my ad copy by legal and they strike it all as a legal hazard, but I run the ad anyway, my boss -- Low-Level Manager #11235 -- is going to fire me the next day, and hope that keeps his boss -- Mid-Level Manager #3142 -- from doing the same to him.
If legal doesn't have veto power on ad copy, it's because the management hasn't given it to them, or because management is making the decision to disregard legal's advice.
That’s not how “legal departments” work. You don’t have to “run it by legal” to get approval. You do that to understand potential legal risks and then you make a business decision. Legal doesn’t run the business unless you’ve explicitly set up the business that way.
Tesla likely has a complete understanding of all the possible legal outcomes. At present Tesla isn’t actually facing ANY criminal charges. They are being investigated, but that investigation has been ongoing for more than a year. This very article cites Tesla marketing and usage language which complicates any potential DOJ case.
Or, actually, Tesla has a great legal department. I am definitely not happy with how they pitch Autopilot/FSD, but if you check all the wording and the small print carefully, there are no fraudulent claims.
Musk/Tesla have been selling alpha-quality-at-best, and in reality vaporware, FSD for a long time. It's never going to work as claimed. It's crap software that endangers other people on the road and routinely kills people, and all they needed to do was add a stupid disclaimer and the magic word 'beta'. Every few months Musk lies that it's 'done' and shipping at the end of the year, with outlandish claims like his 2016/17 robotaxi, or his most recent one that you won't need a human driver.
It's a massive scam and fraud that's made them trillions, and it's amazing they haven't been sued for it.
I'll know when Tesla is actually taking these probes seriously when they finally take off the laughably deceptive video from https://www.tesla.com/autopilot.
Tl;dw: it is a video that says the driver is only there for legal reasons, and that it took them many takes to make the video.
That's been there for many many years and any time this topic comes up I check it and yup, still there.
While I understand the motivations for the investigation, and fully agree it's worth conducting, I don't think it actually has merit, at least not in the terms framed by the article.
In most circumstances it "probably is better than a human driver", not all circumstances, and not proven (hence probably).
But if LM can push the F-35 as the best jet ever while it racks up a wreckage count to make the 737 MAX jealous, the claims made by Tesla aren't even in the same ballpark...
It would be a real shame if we lose any chance of full self driving over a misunderstanding about the system being "not perfected yet"
I know it's on top of mind when I'm commuting in my F35. /s
Comparing a military jet, whose operation requires thousands of hours of training just to get off the ground, to a mass-produced vehicle is not a valid comparison, at least in my mind. Plus, what LM pushes to its stockholders is definitely not as comprehensive as what the Pentagon gets.
> It would be a real shame if we lose any chance of full self driving over a misunderstanding about the system being "not perfected yet"
It would be. But there's no misunderstanding here. Tesla continues to misrepresent their self driving capabilities in marketing materials, after years of feedback that tech hasn't caught up to the promises. If Tesla's profit-seeking dishonesty were to undermine the entire industry, that would be a shame.
"It would be a real shame if this snake oil were banned before we had a chance to figure out what it can cure" might be a better framing.
Can you cite an example of an obviously misleading claim in Tesla marketing materials? People keep making these claims as if they are inherent and obvious truths. From my browsing of the Tesla site it does not seem misleading at all.
> In most circumstances it "probably is better than a human driver", not all circumstances, and not proven (hence probably).
That's the core issue here, really. Is autopilot more or less safe than a human driver? If it is, then it's hard to see how there's any criminal liability here. (And "fraud" judgements over "the car doesn't really drive itself" would be limited to a refunded purchase price on vehicles that sell used higher than their sticker price).
And... is it less safe? Tesla publishes their own data saying not only is it safe, it's MUCH safer. Every time the subject comes up, people show up to whatabout and explain how that's not a good data set. But does anyone have a better one? Does the DoJ?
I was making this point last year when there were half as many Teslas on the roads: there are millions of these cars now, and every one has autopilot[1]. Any notable safety problem would be glowing like hot plutonium in the data if it existed. It probably doesn't exist. Teslas seem to be safe; they're almost certainly safer than the median vehicle.
[1] The headline obscures it, but the text makes clear the investigation is about the Autopilot product, not anything branded FSD.
Exactly why I think the investigation is justified, even though I believe they are safer than cars without Tesla's autopilot.
There is a risk Tesla is hiding data; if they are, the investigation should surely find it. If not, then those hating on the autopilot can cry into their caramel lattes while we move step by step closer to being able to nap in the backseat while travelling alone cross-country.
Is there? That's a weird frame to argue in. There are existing regulations on accident statistics and reporting. Tesla does that. That they also release data split out by autopilot usage certainly can't be construed as "hiding" anything, given that (AFAIK) other manufacturers have varying levels of assisted driving technology too and don't do any public reporting at all.
There's always a risk. Of course it is possible people at Tesla have engaged in criminal behaviour. I don't think it's a particularly high risk, though.
But others do, and it is possible the full story isn't being told. In fact, I would go as far as saying we can be sure the full story isn't being told, for the simple fact that they have some evidence on which to base an investigation - evidence that just isn't given in the article.
> It would be a real shame if we lose any chance of full self driving over a misunderstanding about the system being "not perfected yet"
We can still have self driving cars, but they should be developed within a culture that values safety. Tesla is not such a culture. We know this because after the first accident that resulted in decapitation, Tesla collectively shrugged and made the problem worse by removing sensors, which predictably resulted in a second decapitation. They collectively shrugged after that one as well, and again made the problem worse by removing more sensors.
Tesla does not value safety, and their YOLO attitude toward driverless cars, in which the general public is forced to participate in their beta test whether we like it or not, is holding the driverless car industry back. They are not friends of the cause, and the sooner they are prevented from running beta tests on the general public (which have caused deaths), the sooner the industry as a whole can move forward. Reckless engineering by Tesla will not result in a net gain in safety for everyone. Safety is hard even when done intentionally, it won't be achieved as a second order effect of Tesla's "move fast and break things" ethos.
3 years ago.
It's entirely capable of negotiating public roads with no user input.
Absolutely nowhere has Tesla said (as far as I have seen anyway) "Teslas can full self drive with 100% safety on public roads".
But hey, the 'hate Elon Musk regardless of the facts' crew is out in force. Personally I think they are worse than the 'believe everything Musk says will be ready next year' crew, but neither is worth the downvotes.
This is a weird framing. Are Teslas unsafe? Either they are or they aren't, right? Are other cultures that "value" safety producing safer cars? If they're not, does that say anything about the value of "values"? What's the goal here, values or safety?
It was an expression. Certainly you agree it's quantifiable, right? (Unlike "values".) Questions of the form "are accidents, as defined this way, blah blah blah blah, more or less likely to occur in a Tesla than in a member of this other suitably defined vehicle cohort, blah blah blah" ... are answerable in a binary fashion. Right?
What they’re doing by removing different types of sensor is -simplifying- the Tesla system design and bringing it closer to human senses (i.e. eyesight alone).
Apparently Hacker News thinks humans are safer than Autopilot. So why wouldn’t we advocate a highly advanced vision-based model in cars, rather than a complex, awkwardly synchronised fusion of different classes of sensor?
Take LiDAR, for example. Some claim it’s superior to Tesla’s vision sensors. But LiDAR can’t detect colour, so how will it read traffic lights? Its model of the world will have to be synced up to a camera vision-based model of the world. Syncing two 3D (4D in fact) models precisely is a pretty tough problem to solve. Complexity becomes a risk in its own right.
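To make the syncing problem concrete, here is a toy Python sketch of just the time-alignment step. Everything in it is illustrative; a real fusion stack also needs extrinsic calibration, motion compensation, and per-point projection into the image:

    # Pair each LiDAR sweep with the nearest camera frame by timestamp.
    from bisect import bisect_left

    def nearest_frame(camera_ts, lidar_t):
        """Return the camera timestamp closest to a LiDAR sweep time."""
        i = bisect_left(camera_ts, lidar_t)
        candidates = camera_ts[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda t: abs(t - lidar_t))

    # A 10 Hz LiDAR against a 36 Hz camera: even with perfect clocks,
    # a pairing can be off by up to half a camera frame (~14 ms), during
    # which a car doing 30 m/s moves almost half a metre.
    camera_ts = [i / 36 for i in range(72)]  # two seconds of frames
    lidar_ts = [i / 10 for i in range(20)]   # two seconds of sweeps
    worst = max(abs(nearest_frame(camera_ts, t) - t) for t in lidar_ts)
    print(f"worst-case skew in this trace: {worst * 1000:.1f} ms")

And that is the easy part; fusing the two world models once the frames are paired is where the real complexity lives.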
Eyesight is a combination of the sensory organ and the processing that it feeds into. That is, it includes your brain. Unfortunately Tesla's "brain" is vastly more stupid than most adult humans. Extra sensory organs are a quite reasonable way to compensate for reduced cognitive capability. So, this idea of simplification seems dishonest to me.
Autopilot is a fantastic, world-leading technology. But the marketing of it was (and continues to be) nothing short of criminally negligent. There are over a dozen confirmed deaths at this point, directly attributable to the outright lies spewed by Elon and co. regarding its capabilities.
We were able to ban lawn darts in the 80s because of a few isolated incidents. But the state of affairs we are in now is only possible due to the absurd level of regulatory capture we've reached in this country. I don't expect anything to change at this point.
> There are over a dozen confirmed deaths at this point, directly attributable to the outright lies spewed by Elon and co. regarding its capabilities.
Got a link to a list?
The few cases I've investigated turned out either not to have it enabled or were cases of serious negligence on behalf of the driver.
Also, they get a user manual they need to read/click through before they're allowed to use the autopilot.
That's not the list I've asked for. Those are deaths in car crashes that involved Teslas that were using automated systems. This does not mean the Tesla was at fault (could be other driver or Tesla driver) or that autopilot was involved (the cruise control counts as an automated system) and certainly doesn't mean that system misunderstandings caused by marketing were the cause.
I'm fairly sure the grandparent just didn't consider what the sentence constructed was actually saying and was just expressing a dislike of automated car marketing. Unfortunately, people often believe unsourced comments on hackernews to be literally true rather than sentimentally true.
> Unfortunately, people often believe unsourced comments on hackernews to be literally true rather than sentimentally true
Hi fellow watcher of the impending societal collapse. Glad to know I'm not alone. I have no answers for you but agree that this has gotten significantly worse since 2015 (although such imprecise discourse was inevitable with the creation of social media).
In particular this site discourages commenting about "did you read the article?" which is a great rule but does slowly ensure that no one in the comments has any reason to be actually informed, just plausibly informed.
While I appreciate the link, it doesn't contain numbers of the sort needed. In particular this is just deaths involving teslas with autopilot enabled, not autopilot caused deaths (particularly ones caused by misunderstanding due to marketing material).
For example, entry #259 involves a motorbike being hit by a truck and thrown at an autopiloted Tesla that was stopped in traffic. Was there a death? Yes. Was there a Tesla? Yes. Was autopilot enabled? Sure, let's say it was. Did it have anything to do with the accident? No. Was marketing material to blame? No.
This kind of shoddy, imprecise thinking is everywhere in discourse, and it's honestly quite worrying...
AEB, lane departure warning, blind spot detection, parking assist I all understand. I understand the "wink wink, nudge nudge" reasons to buy systems like autopilot, blue cruise, toyota lane tracing, etc. What I don't get is the by-the-books reasons to buy it. What the hell value are you getting out of autopilot if you're actually keeping your hands on the wheel and your eyes on the road? I'm not saying you shouldn't, I'm asking why the hell you're paying thousands of dollars to do it.
I totally agree! Either the car drives itself or I drive. There's no need to let a car take the wheel if you have to baby-sit it all the time, intervening when it runs off the road. It's dangerous and lulls drivers into thinking the car can drive itself.
Right. Give me "highway mode" where all the car has to do is stay in the lane and not hit the car in front me, and not hassle me about watching the road or touching the wheel so I can watch a youtube video during my thirty minute interstate ride to work. It's going to be a long damn time before I trust a car to do anything more complicated than that by itself.
I can't wait for the day that a car _can_ drive better than a human. I see it as a safety thing. We're obviously nowhere near that but one can dream of safer roads...
I've said this in the past, but instead of the government just allowing auto-updates and whatnot, the AI needs to pass a driving test. Each version must be certified by some test. No, not the same test a human driver goes through. Something super rigorous that involves semis tipping over, bridges falling from above, babies crawling in the road, dogs, deer, rain, smoke, hail, snow, elephants, downed power lines. Maybe even a 10-year-old kid running in front.
Something that all of the companies would go through, administered by the government (not Waymo).
Meanwhile, 30,000 people a year will continue to die in car crashes while you strangle the entire industry in red tape. But, hey, at least you're "doing something."
If every car had Autopilot right now there would be a massive reduction in deaths. All the bad drivers, texting drivers, drunk drivers, and road rage drivers - given the option to take a break and watch some Netflix. Regardless of how imperfect FSD is, they are much much worse.
I fully agree. Every day on my commute I see tons of people who are on their phones, and I've stopped counting the number of unsafe overtakes and tailgating incidents I experience. I would take all the cars being driven by FSD any day of the week!
Every version of the software is tested on thousands of situations in simulation before release and has to pass all the regression tests which are constantly being added to. Surely this is much better than a super limited real world test. (It's also tested in the real world by testers before going into wider release.)
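As a sketch of what such a regression gate might look like (the scenario names and the simulate() stub are made up for illustration; this is not anyone's actual tooling):

    # Hypothetical scenario-regression suite: a release candidate must
    # pass every scenario ever added. pytest is one way to run it.
    import pytest

    SCENARIOS = [
        "pedestrian_crossing_at_night",
        "semi_cutting_in_at_speed",
        "faded_lane_markings_in_rain",
        "stopped_firetruck_in_lane",
    ]

    def simulate(build, scenario):
        # Placeholder standing in for a real physics/sensor simulation run.
        return {"collisions": 0, "min_time_to_collision_s": 2.3}

    @pytest.mark.parametrize("scenario", SCENARIOS)
    def test_release_candidate(scenario):
        result = simulate("candidate-build", scenario)
        assert result["collisions"] == 0
        assert result["min_time_to_collision_s"] > 1.5

The point is that the suite only grows, so a failure mode that was fixed once can never quietly ship again.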
The self-driving cars report their disengagements to the DMV. The idea is simple: we don't know in what ways things can fail because this is terra incognita and so we allow certain restrictions and you be honest and we'll review if what you're doing is okay.
Because of the number of variables, an experienced human driver has to be ready to make decisions they can't specifically train for. What AI can reach that deep?
I once had a few seconds to choose between 1) a multi-vehicle collision at 60mph with oncoming vehicles in both lanes that had a steep up-hillside to their right (one was a gasoline tanker), and 2) getting my vehicle stopped on a two-foot shoulder with a long steep 45-degree down-hillside next to it (no guard rail). I would NOT have wanted computer assistance, thanks anyway.
Silly remark. The trolley problem involves the unavoidable death of at least one party - the decider is not among those who die. Not analogous.
The Uber that killed Elaine Herzberg in 2018 failed to solve the only problem it had. Did the CPU die? Did Uber or Uber's president?
My decision saved the 3 lives in the oncoming vehicles (including the idiot who decided to try passing the gasoline tanker in their rattletrap), as well as my own and those of the two additional children I eventually had (and their children).
I was going to say, the vast majority of weird near crash situations can be dealt with by slowing/stopping. That is probably not so hard to program in and not so different from what humans do.
As of a few years back, Teslas running 'FSD' had failed to detect several parked police cars, a parked firetruck, a moving semi pulling in front, and a concrete lane divider, striking all of them without slowing down at all. At least two operators died.
You're describing a utopia where outcomes are perfectly predictable, which is silly.
"And roads should be without any defect, weather should never happen, pedestrians and other drivers should always behave rationally, and black ice is not legally allowed to exist."
Exactly. The real world that real people have to drive in is far, far more complex than these sloppy machines could ever know. The map is not the territory.
I've said it before, and I'll say it again, Tesla the electric car company is doing great. Tesla the FSD tech unicorn, is going to end in tears. I will not be surprised at all when some of the engineers on the AI team go to jail over this. Why the engineers? Because the executives almost always are able to afford high powered lawyers to avoid any serious consequences.
This investigation is about statements made about the car's capabilities, not an issue with the capability itself. This is squarely on their marketing and executive teams.
In fact, some of their other departments are notable for contradicting the lofty claims of their marketing.
When the shit starts flying, the execs and marketing team are absolutely going to throw the engineers under the bus. "We were just repeating what we were told by the engineers." "We didn't know any better."
It happens every time. The best defense for the engineers is to have thoroughly documented the limitations of the system.
All of those limitations are documented and are even published in the owners manual, among other things. "Nobody told us" doesn't hold water in this scenario. Musk knows exactly what his car can and can't do, but he sells it with misleading marketing anyway.
There is a character who is a wonderful example of this category in "Going Postal" by Terry Pratchett - I'd suggest reading the whole book because it's fantastic, but in particular Mr. Pony is the epitome of the put-upon engineer.
> Pony looked around, a hunted man. He’d got his pink carbon copies, and they would show everyone that he was nothing more than a man who’d tried to make things work, but right now all he could find on his side was the truth. He took refuge in it.
This is an outlandish claim and needless fearmongering. Why stop at Tesla, should Meta/Twitter engineers also go to jail for their role in building platforms that can be used to incite violence? Boeing engineers for the 737 Max? Toyota engineers for the stuck accelerator issues?
Is a year and a half that long for such a high-profile case (potentially with co-conspirators and other related cases going on)? Also, sentences have been doled out. They aren't being Gitmo'd.
> Everyone loved Musk, Trump, Oz, etc before they became politicians.
Musk has been controversial since the hyperloop, and certainly plenty of people hated him for many years for a variety of reasons. I could recap, but that's been out for years so you can probably fill them all in. Which is my point.
I have no idea who you thought loved Oz or Trump. Oz peddled scam cures. Trump was a failed real estate baron/casino operator/etc who played less of a failure on TV.
And Oz and Trump had failing TV shows. Oz was down 17% YoY right before he announced, putting him in 10th in the timeslot, and Trump was also down about 17% YoY (for a few years in a row) before they hopped into politics. But even the people impressed by "can get on TV" faded on them.
I do feel that the Republican voting population and the audience for those shows had a significant overlap. Leading people likely to vote for them to think there was a sudden shift.
Really, it's the shareholders who should go to jail over this, for threatening to short Tesla stock if it doesn't deliver at every quarterly earnings call. They are the ultimate root cause of Tesla's behavior around FSD.
If shareholders collectively promised and agreed to hold Tesla stock and talk it up for a 5-10 year period, it would give Tesla the space and time it needs to develop FSD safely.
As long as they're at the mercy of ruthless quarterly free market capitalism, safety will go to shit. Same thing with Boeing.
You can't be serious. The shareholders demand returns, but the EV side of the company has already been successful enough to keep them placated. No one forced Musk to go out there and repeatedly claim that Teslas would be able to be used as robotaxis, or that you'd be able to drive cross-country with no hands. He's still out there making these outlandish claims! At some point, the music is going to stop.
> This is the problem. Shareholders demand short term returns, not a long term safe/sustainable future and long term returns.
So what? There's nothing wrong about that. Shareholders demand both long term stability and short term returns btw - if the market thought there are no long term returns to be had, the stock price would've crashed.
Shareholders don't have much say about how Tesla does it, and they're actually the ones defrauded.
The shareholders can demand whatever they want including shiny unicorns. Musk and Tesla are entitled to say "No".
When Tim Cook was asked pointed questions on why Apple gave a crap about environmental concerns, rather than focussing on pure profit über-alles, he smacked down the questioner [1]
"If you want me to do things only for ROI reasons, you should get out of this stock.”
A company's decisions can be guided by shareholders, but it's on the company to forge their own path. If the shareholders don't like it, they'll vote with their feet. Promising unicorns to keep the feet planted where they are is not a good strategy.
- "The Justice Department’s Autopilot probe is far from recommending any action partly because it is competing with two other DOJ investigations involving Tesla, one of the sources said. Investigators still have much work to do and no decision on charges is imminent, this source said."
- "The Justice Department may also face challenges in building its case, said the sources, because of Tesla’s warnings about overreliance on Autopilot."
How often do criminal defendants get to read the internal deliberations of their prosecuting office, and before they're even charged?
What a dysfunctional mess. This leak was unethical, unlawful, and unworthy of the gravitas of an office whose work puts people in prison.
If I search the NYT from 1990 to 2010 for "DOJ" "charges imminent", I get >2000 results. There's nothing happening with respect to Tesla here that doesn't happen all the time. DOJ is a big organization; it has 100,000 employees. We just notice this stuff when it intersects people and companies we have a rooting interest (one way or the other) in.
> How often do criminal defendants get to read the internal deliberations of their prosecuting office, and before they're even charged?
There's really no reason this info should be secret. If you can provide extra info to persuade them not to charge you, then all the better - time saved for both parties.
It's entirely possible DOJ decided this was a good time to leak this information. For instance, just spitballing, they might be saying, "hey, we're going to bring charges, but not for a while, and we want to tell investors ahead of time so that the news causes less volatility, and the volatility will be better isolated to just Tesla."
No real way to know from the outside, unless they bring charges against a leaker.
This is after last week Reuters claimed that Elon Musk was under a federal investigation, which the White House later denied. (Almost no one reported the denial.) I'd wait for more information before jumping to conclusions.
Between the regular pro-Russian reporting, this, and other issues, I've found Reuters to be a highly unreliable source recently. They're very hit and miss.
It's also important to look at the stock market. There's zero blip on the ticker from this news, which means that basically no one who owns a decent amount of TSLA stock cares about it. Which generally means they know something that's not in this article.
I don't think the White House denied that Elon Musk was under investigation. It would be...completely inappropriate for the White House to comment on specific investigations from the FBI or DOJ.
Rather, the White House denied that there was a national security review of Elon Musk's projects. That's important, but very distinct from a denial that there are no investigations of Elon Musk. It's also a subject matter that is much more appropriate for the White House to comment on.
That statement is about a national security review. What Reuters report do you think this disputes? Reuters reported on an investigation claim made by others in court:
"There's a lot of interest in this. We heard those reportings, those reportings are not true, so we'll leave that there. The national security review, that is not true and I really don't have more to say on that piece, on Elon Musk and what he's choosing to do and not to do, I'm not going to say more from here".
Which... sounds exactly like what the person you are replying to was saying.
"WILMINGTON, Del., Oct 13 (Reuters) - Elon Musk is being investigated by federal authorities over his conduct in his $44 billion takeover deal for Twitter Inc (TWTR.N), the social media company said in a court filing released on Thursday.
While the filing said he was under investigations, it did not say what the exact focus of the probes was and which federal authorities are conducting them."
They are reporting what the court filing said, what are they supposed to do?
> They are reporting what the court filing said, what are they supposed to do?
Maybe put a modicum of effort into verifying comments put forward by an antagonist in a court case? Lawyers regularly lie or twist the truth in an attempt to push their case forward.
Musk is buying Twitter today or tomorrow, and he'll likely need to sell more Tesla stock unless he can get additional financing. This news will obviously depress Tesla stock. The timing is conspicuous, to say the least.
My jaded view is that companies this size routinely commit criminal acts but enforcement is the selective result of political relationships in DC.
There’s obviously some spat going on between Musk and the Biden administration that the public has limited insight into. Suddenly Biden’s DOJ is going after Musk… too much of a coincidence for me.
Musk has been fairly open on the spat. Basically Biden is funded by the unions and promotes unionised car companies like Ford/GM over Tesla. I doubt the DOJ thing is part of it.
Back during the summer, a friend of mine's son was killed in an accident involving a Tesla on Autopilot. I can't help but feel that the overselling of Autopilot's capabilities is just Elon Musk feeding his own ego.
What is the government’s minimum standard for a car autopilot to be sold as an “autopilot”?
Air autopilots rely on a pilot being present, observant, and able to take over if necessary. Air autopilots are not infallible.
How could the government come up with benchmark standards for car autopilots when Tesla (and others) are actively inventing the technology?
How could Tesla realistically train its ML models if not for the billions of miles of data it collects from Tesla owners?
How much damage will this litigation do to American autonomy R&D?
How much of the recent Musk hatred (by institutions such as the media, and large elements of the public) is due to Musk re-aligning himself politically against the Democrats and Big Tech?
> What is the government’s minimum standard for a car autopilot to be sold as an “autopilot”?
California seems to be using the Society of Automotive Engineers' standard here, per the forms for applying for an AV license.
> Air autopilots rely on a pilot being present, observant, and able to take over if necessary. Air autopilots are not infallible.
Air autopilots aren't really comparable. First and foremost, pilots require far more rigorous training than drivers, and are unlikely to be wholly unaware of the limitations of autopilot. It's also almost unanimously agreed that air autopilot is safer than manual flying under some conditions. There's some debate over how heavily it should be used, but I can't find anyone saying it shouldn't be used at all.
> How could Tesla realistically train its ML models if not for the billions of miles of data it collects from Tesla owners?
Tesla would pay employees, who are well-versed in the risks, to monitor the vehicles while they drive. The same thing that other self-driving companies do.
> How much damage will this litigation do to American autonomy R&D?
Probably very little. Tesla is the only one selling to consumers, so they're really the only one in the space that could be at risk from this lawsuit.
> How much of the recent Musk hatred (by institutions such as the media, and large elements of the public) is due to Musk re-aligning himself politically against the Democrats and Big Tech?
I won't discount it as a possibility, but Musk has had a lot of detractors for a long time. It was bound to happen at some point; you can only overpromise and underdeliver for so long before people start to notice.
The Biden admin has clearly signalled that it resents Musk personally due to his political views, but this probe was launched last year, and is completely unrelated to that.
No. Probes are easy to launch; all strategically important companies, people, and even local authorities have a list of associated ”probes” either active & simmering (like Tesla) or pending with the probable cause for launching a probe in a database somewhere.
Usually, it will result in some sort of cost-of-doing-business fine or not be prioritized at all.
Then, occasionally, someone gets uppity and the simmering pot gets the gas turned up.
I do point out here that the timing of this leak (which clearly came from the DoJ itself) is very suspect, given that it could have depressed Tesla's stock price, and it came right before Musk is expected to need to sell a little more $TSLA to finance the Twitter deal: https://news.ycombinator.com/item?id=33354143
Are you aware of how many government grants and tax benefits Tesla/SpaceX have collected[1]? They're as much "government money sifting leeches" as the rest.
Firstly, that article is from 2015; second, if you actually look into what the subsidies are for, they're almost all things that any manufacturer can access, not Tesla-specific. Are you against any subsidizing of green technologies?
> Everything Tesla/SpaceX has done has been: 1)Light years above the competition 2)Done with less employees and less money 3)Completed in a timespan that makes companies like
We're discussing Tesla's self driving system. It's certainly not light years above the competition, as it doesn't even use state of the art sensors for safety like LiDAR, which has predictably resulted in multiple Teslas killing their drivers. Maybe they've built what they have with fewer employees and less money, but I don't think that's a win given the deaths. They probably should have used more money and employees to prevent that kind of thing. As for completing it in a timespan, I mean... they're nowhere near done and have been saying for years it'll be ready any day now, yet are still happily accepting money for their promises. That's pretty much the point of all this.
Tesla is a lightning rod. Everyone wants a piece of it. They are held to a much higher standard and live under the media microscope. Tesla and Elon generate clicks and discussion - there's nothing like it.
>the people familiar with the inquiry said
>the sources said
>they said.
>one of the sources said
>this source said
These articles with no named sources are so tiring. This one didn't even bother to tell us why the sources are anonymous. Hard news and serious journalism is dead and buried. Did this reporter even have to leave their figurative basement to write this story?
If reporters name their sources on stories like this, then suddenly they don't have sources anymore, and the media becomes, well, pretty much a system to regurgitate press releases. Like, this is how it has always worked.
I don't understand why Elon claiming that the cars will be able to drive themselves in the future could be considered any kind of crime? I don't think Tesla has ever said "hey you can get in the car today and it is fully autonomous". They've just said that's what they will do in the future. They've also shown and talked about products in-development.
You might think they are full of shit about whether it will ever be delivered, which is a fair argument, but that's much different than lying about the actual product in people's hands.
If Tesla knowingly makes material false claims about what their cars can do, especially if they have internal evidence that people are using the feature dangerously in ways Tesla could have prevented, they're perpetrating fraud.
Most frauds are prosecuted in state court. There are several federal fraud statutes. For instance, wire fraud, which is any interstate fraud that uses telecommunications, has the following (paraphrased) predicates in the model jury instructions:
(1) You knowingly took part in a scheme to deceive people.
(2) The lies you told were material and caused people to spend money or give up property.
(3) You had an intent to cheat people out of money.
(4) You used some form of interstate telecommunications as part of the scheme.
Prosecutors don't have to prove that Tesla says "the car is fully autonomous today". They just have to prove that Tesla made statements that it knew weren't true, in order to get people to buy cars or Tesla stock. Those statements can be much narrower than "we have achieved full FSD", so long as they are material: that is, so long as they're significant enough to influence people's purchasing decisions.
Sure, I just wonder if there actually are any such statements. Seems like usually people point to statements that are Elon tweeting "I think it might be ready next year".
He's not just a random speculator tweeting on the internet. He acts as a spokesperson for the company using his personal twitter account. He can't* merely escape liability by posting shitmemes in between PR statements & baiting investors.
* I mean... if you or I were to operate a company like that, we'd get destroyed, anyway. Whether or not Musk is too rich to face any real consequences for, say, shooting somebody on 5th ave, is an open question.
Moreover, when I was promoted to Staff Engineer at a certain company, I had to go through training to drive home the point that if I spoke in public, about company business, I was considered a representative of the company and my statements could lead to legal action against the company because my position was considered a Management-level role.
Whether or not he faces any real consequences (unlikely), it will be interesting to see what impact it has on the corporation.
> He acts as a spokesperson for the company using his personal twitter account.
Tesla has made corporate filings stating that @elonmusk is an official communications channel of the company. That's why he was required to have a lawyer vet all his Twitter posts, even though he plainly and openly ignored that requirement.
The article quotes Elon Musk as having said "Like we’re not saying that that’s quite ready to have no one behind the wheel". That statement is presented as exculpatory for Tesla, despite the fact that there's miles of safety margin between "nobody needs to be behind the wheel" and "autopilot is safe to rely on as advertised in all circumstances". If that's what's getting Tesla off the hook, chances are, they've got substantially worse things in their files.
The other thing is, the messages Tesla sends to the market can be contradictory; prosecutors will home in on the least responsible things they say, and it'll be up to the defense to establish "no, what we really meant, and what every reasonable person took away from what we said, was this banal statement we made in the manual for the car". That'll be tricky, because people have obviously been killed by Tesla's "autopilot" feature, which they were dumb enough to name "autopilot".
Who knows if there's a real case here, though? There may not be!
What I'm saying is that I don't actually think Tesla or Elon has suggested you use Autopilot (or Full-Self Driving Beta) in any way other than the approved way.
Plus, when enabling either feature (which are opt-in) a very clear warning is displayed which you must agree to.
So I just don't know who is like, "well, I bought the thing and read the warning and agreed to it but just assumed it was fully autonomous because it's called Autopilot and therefore it is reasonable for me not to pay attention".
> What I'm saying is that I don't actually think Tesla or Elon has suggested you use Autopilot (or Full-Self Driving Beta) in any way other than the approved way.
Tesla does that quite regularly.
Summon - "bring your car to you while you deal with a fussing child" (fine print buried far away, "pay complete attention to car").
FSD - "the driver is only in the seat for legal purposes - the car is driving itself".
I'm not sure about your first quote (the only google results were for this comment), but the second one was a demonstration of a feature that was not and is not available!
OK, so what you said is wrong, since the Elon quote is:
> [Smart Summon is] the perfect feature to use if you have an overflowing shopping cart, are dealing with a fussy child, or simply don’t want to walk to your car through the rain. … [But] those using Smart Summon must remain responsible for the car and monitor it and its surroundings at all times.
So he explicitly explains that you have to monitor it, and does not bury that in the fine print as you claim. Hardly a smoking gun, even if you really are contorting yourself to find something objectionable in this quote.
"Very clear warnings" in manuals and in car displays aren't a get-out-of-jail-free card. If Tesla never made any communications that contradicted those displays, they'll probably be fine, at least with respect to FSD safety.
I don't know what the standards are in the automotive field, but in medical devices labeling (including product manuals) is considered the absolute last resort when attempting to mitigate a hazard. The odds of the FDA or any other regulatory body letting you off the hook after your device caused an Injury to Patient or loss of life because "we said in the manual to not do that" is effectively zero.
It is a crime because he is collecting money for it now.
At the minimum it is a violation of the "30 day rule." If you accept money for a preorder of something, you must clearly state a delivery date. If you do not, it is assumed within 30 days of an order.
Tesla can face FTC fines of $16,000 per Autopilot sale, in addition to lawsuits from consumer protection agencies in every state where a vehicle was sold with the undelivered feature.
This is why all the crowdfunding platforms are basically donations with "rewards" of products and not preorders.
This is interesting and very specific. I would think that if this applied it would be quite open-and-shut, and handled by the FTC directly. So my guess is it does not apply. Perhaps because it is not "merchandise"?
I believe the lack of enforcement action is because FTC is understaffed for the amount of stuff under their purview, combined with the fact that not enough defrauded consumers know they need to file complaints.
The probe isn't about the future "full self driving" product.
It's about the current "autopilot" product, or more importantly, the marketing around it. They are investigating whether Tesla deliberately oversold the capabilities of Autopilot, implying it can do far more than it actually can.
Basically: are Tesla criminally responsible for customers who misunderstood what autopilot was, and then didn't correctly supervise the autopilot and ended up in accidents?
The fact that Tesla/Musk were continually talking about the FSD features will also play into the probe, but only because customers might have confused things said about the future FSD feature with the capabilities of the current system, and only if Tesla deliberately encouraged or knew about this confusion (or should have known).
Their marketing has lots of videos where they tout self-driving (calling it full self-driving or Autopilot) without any clarification it is a future promise. And then there is a feature on your Tesla called Autopilot you can click one
I know if you are in the weeds you know the capabilities, but I think most people get the impression it can drive itself and it's safe. And that impression is a direct result of marketing materials. Given the severity of car accidents, it seems reasonable to be strict about these marketing claims, even if they call it a "beta" in the fine print.
> without any clarification it is a future promise
and even then, oftentimes it's -very- strongly implied that if it's not really available now, the only thing holding it back is regulators.
(Well, that's actually true in its own way, though not a good thing - I am certain that Elon would be yeeting all sorts of shit over the fence if the regulators let him, if only so Tesla could realize all manner of deferred income.)
It's the claims he makes about the present that are a problem. Also, when he's selling "beta" access with the expectation that "release" access will be more expensive, along with a never-ending promise of "this year" and "next year", that sounds like reasonably clear-cut fraud to me.
I am not sure there are any actual claims about the present that would lead a reasonable person to conclude their car currently has features that it does not.
I don't think it's anywhere near clear-cut fraud to say "you can pre-order this for a cheaper price" and to be wrong about your deadline estimates. Yes, if it never comes out or there is evidence they never even tried to make good on the offer, then you would be open to some kind of legal action. But clearly they are trying. If a bunch of owners want to try to argue that the timeline was promised and missed and do a class action to get their money back, that seems reasonable enough. But none of this seems to relate to actual safety issues or anything.
See my comment upthread. If you accept a preorder you must offer a firm delivery date. If you cannot make that date, your only option under the law is a refund. You can't push the date back.
How do Kickstarter and early access games sold on Steam function in that magic universe where "if you accept a preorder, you must offer a firm delivery date"?
As far as I can tell, there aren't droves of Kickstarter campaigns that keep getting sued or prosecuted regularly just for perpetually delaying their delivery dates (often by years).
Kickstarter and other crowdfunding platforms are framed as donations.
Steam Early Access allows you to request a full refund at any point up until the actual launch of the game (and then 14 days after based on the standard refund policy), even if you have played it extensively.
I think you're misinterpreting "you can request a refund at any time prior to release" – the release date is the date it comes out on Early Access, not any potential 1.0.
And re: Kickstarter, I think "pledge" relates to the fact that you don't have to pay unless the project gets over its goal. I remain very skeptical that pre-orders are as legally shaky as you claim.
They don't, just like the non-released FSD doesn't, because it isn't available yet.
The original claim I replied to was about the possibility of Tesla getting sued for repeatedly delaying the release of a product that's in active development (and thus unavailable). I don't see how the discussion about a product that the public has no access to "killing people" is relevant at all.
You can read the DMV's complaint at [1] which is kind of the prior art here; I haven't found a filing for criminal charges yet. It cites 4 things from the Tesla website:
1) Usage of the term "Autopilot"
2) Usage of the term "Full Self-Driving Capability"
3) The phrase: "The system is designed to be able to conduct short and long-distance trips with no action required by the person in the driver's seat."
4) The statement: "From Home - All you will need to do is get in and tell your car where to go. If you don't say anything, your car will look at your calendar and take you there as the assumed destination. Your Tesla will figure out the optimal route, navigating urban streets, complex intersections and freeways. To your Destination - When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself. A tap on your phone summons it back to you."
3 is probably the most damning, because it specifically promises features that do not work, and that could not legally be used that way even if they did, without further licensing.
They allowed you to buy the "car can drive itself" capability and never fully delivered. It's not like this was just a promise, it was something you could pay for years ago.
Yeah, but that isn't a crime. You can pre-order things. They still claim they will deliver it. Seems like maybe you could do a class-action or something to get your money back, but not argue that actually it was reasonable to believe you already had it and therefore are not liable for a crash or whatever.
> Yeah, but that isn't a crime. You can pre-order things.
It actually is. When accepting a preorder you have to provide a fixed delivery date, or it is assumed to be no later than 30 days after sale (or 50 days if you offer in-house financing). The fraud case here is actually very straightforward.
IANAL but I think you could call it fraud if you never had any intention of delivering. E.g. if I kickstart a perpetual motion machine and then move to the Bahamas with the funds.
What's ironic is that, despite his image as a genius, no one ever holds Elon to account for his failure to accurately predict the state and progress of his own technology. Either he's not as smart as people think or he's knowingly lying, but I think you have to pick one.
Would we try to stop it?
A lot of companies would like to stop it, that’s for sure.
There's no doubt that a computer will match a human, then exceed its capability: 24x7, never tired, never distracted.
For the moment, every driver has to acknowledge that they are responsible, need to keep control, and have to stay vigilant every time they use self-driving. It seems to me that takes precedence over the product brochure or a random comment in a tweet a few years ago.
Of course this comment is going to be heavily downvoted like everything positive about Tesla here. I wonder why?
My point is that the tech is improving. We can aim for maximum safety and regulation, but in the end, would that save lives, or would sticking with the status quo cost more of them?
There is a cost both ways; we are used to paying the cost of the status quo, so it feels easier to push back against progress.
Furthermore, to me the issue is not safety vs. non-safety: the advertising is clearly misleading, at the very least.
>For the moment, every driver has to acknowledge that they are responsible, need to keep control, and have to stay vigilant every time they use self-driving. It seems to me that takes precedence over the product brochure...
Maybe we should hold the company accountable for making sure the product brochure is truthful, because if people trust the brochure they might die over it.
"Current Autopilot features require active driver supervision and do not make the vehicle autonomous."
It’s clear they are working toward full self-driving, but it’s not there yet.
"The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving capabilities are introduced, your car will be continuously upgraded through over-the-air software updates."
> What if Tesla are safer than a regular driver? What if they actually saved lives? There is plenty of video evidence of that
"video evidence" is not the kind of evidence which can demonstrate that. The kind of evidence which could demonstrate that Teslas are safer than a regular driver would be statistics. Statistics which by the way is available to Tesla, and they could let researchers scrutinise if they choose to do so.