
Let's look at some of the quotes and think about whether we could get away with them...

Aug 7, 2018 "Am considering taking Tesla private at $420. Funding secured." Elon Musk

Aug 7, 2018 "Shareholders could either to sell at 420 or hold shares & go private"

After paying a $20 million fine, he tweeted this about the Securities and Exchange Commission:

"Just want to that the Shortseller Enrichment Commission is doing incredible work," "And the name change is so on point!"

2018

"I want to be clear: I do not respect the SEC, I do not respect them,"

2019

“By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware, feature complete, at a reliability level that we would consider that no one needs to pay attention,”

October 21, 2019

"Next year for sure, we will have over a million robotaxis on the road," "The fleet wakes up with an over-the-air update. That's all it takes."

2020

"All Tesla cars right now have everything necessary for self-driving available today. All you need to do is improve the software."




The SEC tweets are of course much easier for us to get away with. The only reason they’re of any interest at all is that Elon is rich and powerful.


What's wrong with the latest statement?

Technically, all cars have a proper GPU (Tesla's "GPU") and 360-degree cameras, and the reliability is probably in the 90%-99% range (which, yes, is pretty terrible for something meant to be fully autonomous, but impressive from a technical standpoint). Technically it is correct to say that all they have to do is "improve the software". That will take a lot of work, sweat and tears, but it's still correct.


Since the fall of 2016, Tesla's website has said:

> "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."

Yet the original "HW1" computer was replaced by "HW2", which was replaced by "FSD Computer 2.5", which was replaced by "FSD Computer 3".

If you want to activate the Full Self-Driving package today on a Tesla you just bought in early 2019, they'll tell you that your "FSD Computer 2.5" is not capable enough for even today's level 2 autonomy features; you first have to replace it with "FSD Computer 3". And if your car's a bit older, but was still sold as having "the hardware needed for full self-driving", you have to pay $1000 for that upgrade before you can pay for the "full self-driving" that isn't self-driving.

How many times does Tesla have to claim the hardware is good enough, but then change their minds a year later, before any reasonable person stops believing them?


Had the GP quoted Elon saying this in fall 2016 then you would have an entirely valid point. But the GP did not do this. The GP dated the quote as being made in 2020, when all new cars did in fact contain HW3.


The quote is not "all Tesla cars have HW3", it's "all Tesla cars right now have everything necessary for self-driving". He said that same thing about HW2 and about HW2.5. There are no self-driving Tesla robotaxis running any computer version. It was as wrong to say it in 2020 as it was in 2016, 2017, 2018 and 2019. Maybe at next month's "Tesla AI Day" (not to be confused with 2019's "Tesla Autonomy Day") he'll debut HW4 which will definitely be the one that turns your car into a robotaxi.


You appear to be misquoting. Can you please cite this 2020 quote verbatim and in context? If you can, and the context is consistent with what you have inferred, I will retract everything and offer my most generous and profuse apology.


The quote is from further up this thread we're in:

https://news.ycombinator.com/item?id=28002731

It's verbatim from Elon Musk on Autonomy Day 2019, per this article:

https://www.engadget.com/2019-04-22-tesla-elon-musk-self-dri...

This is the same event where he promised a million Tesla robotaxis on the road by 2020.


> It's verbatim from Elon Musk on Autonomy Day 2019

No, that was NOT a verbatim quote. Here is the exact quote with me adding emphasis on the critical (but oft omitted) qualifier:

"All Teslas being produced right now have this computer. We switched over from the Nvidia solution for S and X about a month ago, and we switched over the Model 3 about ten days ago. All cars being produced have the all the hardware necessary, compute and otherwise, for full self-driving. I'll say that again. All Tesla cars being produced right now have everything necessary for full self-driving. All you need to do is improve the software."

https://youtu.be/Ucp0TTmvqOE?t=5663

> This is the same event where he promised a million Tesla robotaxis on the road by 2020.

That is, in my opinion, not a contextually correct reading of what Elon said. Within the context of the speech it's clear (to me) that for him, "robotaxi" is used as shorthand for "vehicles with the hardware capable of being a robotaxi." Obviously I cannot prove this, and I recognise that responsibility ultimately lies with Elon to be clear.

I do agree Elon did say multiple sentences that are absolutely misleading when read in isolation, but his actual words in context (and with some recognition that this is a forward looking statement made with optimism) were far less egregiously wrong than the snipped quote implies on its own. You can watch him say it in context and make up your own mind:

https://youtu.be/Ucp0TTmvqOE?t=11612


The additional context around the same quote doesn't change anything as far as I can tell. They said the exact same thing about their previous computer systems, then just 1-2 years later replaced those systems because they were in fact not capable enough for self-driving. There are still no self-driving robotaxis on HW3 over two years later, when they promised they would exist in 2020.

Same thing with the robotaxi quotes. Elon was very clear on autonomy day and the days surrounding it. The promise was not shorthand for "vehicles capable of being a robotaxi", it was Tesla vehicles, and I quote again, "operating robotaxis next year with no one in them", and "I feel very confident predicting that there will be autonomous robotaxis from Tesla next year — not in all jurisdictions because we won’t have regulatory approval everywhere.". You don't need regulatory approval for hardware alone.

In another investor presentation Elon Musk gave in 2019, per a reporter covering it, Tesla promised that the Tesla Network, which would let owners send their functioning robotaxis out to pick up passengers for money, would launch in 2020.

> Starting next year, owners will be able to flip a switch inside the Tesla app, and send out their car to pick up and drop off passengers autonomously, earning an estimated 65 cents per mile in fares. By Tesla’s estimates, owners might be able to earn $30,000 in gross revenue from their cars per year, or more than $300,000 in revenue over the 11-year lifespan of their car.


That’s a whole bunch of stuff I don’t necessarily disagree with, I was only responding to specific points. I am not here to defend Elon’s entire verbal history.

Not included in your response, I should note, was any acknowledgement that the quote you represented as verbatim was in fact not verbatim. This makes me entirely uninterested in further interaction.


Replacing hardware does not make old hardware incapable of FSD.

To give you an example: ImageNet classifiers for many years required a decent computer and a GPU to run inference. Today, MobileNet, a far smaller architecture for the same task, runs on phones with no GPU, thanks to advances like half-precision and other optimizations.
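To make that concrete, here's a minimal sketch (assuming PyTorch and torchvision, my own choice of tools, not something from this thread) of running a pretrained MobileNet in half precision for inference:

    import torch
    from torchvision.models import mobilenet_v3_small

    # Downloads pretrained ImageNet weights; the model choice is illustrative.
    model = mobilenet_v3_small(weights="DEFAULT").eval()
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Half precision halves memory traffic; fall back to float32 on CPU,
    # where fp16 kernels may be missing.
    dtype = torch.float16 if device == "cuda" else torch.float32
    model = model.to(device=device, dtype=dtype)

    x = torch.randn(1, 3, 224, 224, device=device, dtype=dtype)  # stand-in for a real image
    with torch.no_grad():
        logits = model(x)
    print(logits.argmax(dim=1))  # predicted ImageNet class index

The point being that an inference task which once needed a desktop GPU now fits on modest hardware, mostly through software and numeric-precision tricks.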

So, how can we be so sure that FSD can't EVER be ported back to HW1?


That's some pretty impressive goal post moving there.

It doesn't matter whether it can't ever be ported back. It doesn't work today, and that's all there is to it. Musk lied. And it isn't the first time he's done so, either.


That's a bit harsh. Many technology products have delays; it sucks and makes customers unhappy, but it's not uncommon and it's not considered lying, not in my book at least. Windows Vista was delayed for many, many years [1]. They actually promised a full database-backed replacement for the file system (WinFS) that never really shipped, etc. This is not to criticize them, but just to say that when ambitious goals are set, delays often happen. You could definitely make the case that it's better to release products Apple-style and just unveil them when they are ready; that would be a fair point.

[1] https://en.wikipedia.org/wiki/Development_of_Windows_Vista


It's not harsh. He sold cars claiming they could do FSD in the near future. He claimed he would have a million robotaxis in 2020. He claimed Teslas would pay for themselves in a year's time working as taxis for you (a bald-faced lie even if FSD were here today).

Windows Vista and software in general are not even remotely comparable: they are not sold today with the promise of features tomorrow; they are generally sold when they're ready. Even pre-ordered games are usually refunded if they suffer huge delays or ship with significantly altered features (see Cyberpunk 2077).


Although, most V1 software is released early as an MVP (Minimum Viable Product) to get a lot of feedback and iterate. So it's a very reasonable approach to release FSD incrementally and get early adoption and early feedback. Again, it's a pretty hard problem to solve; so even if they fail, lots of respect from me to their team for getting this far.


What will it take for you to admit that it doesn't exist?

Just like you can't be a little bit pregnant you can't have a little bit of FSD. You either have it or you don't. It's a boolean, and it's embedded in that word 'full'.


> MobileNet

Okay, compare the accuracy of MobileNet to ViT. ViT has 90.45% top-1 accuracy with 1.843B parameters, while MobileNet (2020) has 79% with 5.47M parameters (v3 is 75% with 5.4M). [0]

These two networks are nowhere near equal. If you could put ViT on a phone and run real-time inference you would. But you can't, so you take a major cut in performance to get what you can.

There are multiple metrics you need to look at when evaluating networks. "Being able to perform on a dataset" is not even a particularly useful one.

[0] Have fun https://paperswithcode.com/sota/image-classification-on-imag...
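For a rough feel of that gap, here's a sketch comparing parameter counts (the torchvision model names are my own stand-ins; the 1.8B-parameter ViT cited above isn't in torchvision, so this uses the much smaller ViT-B/16, and the gap to MobileNet is still more than an order of magnitude):

    from torchvision.models import mobilenet_v3_large, vit_b_16

    def n_params(model):
        # Total learnable parameters, a first-order proxy for compute and memory cost.
        return sum(p.numel() for p in model.parameters())

    print(f"MobileNetV3-Large: {n_params(mobilenet_v3_large()) / 1e6:.1f}M params")
    print(f"ViT-B/16:          {n_params(vit_b_16()) / 1e6:.1f}M params")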


>So, how can we be so sure that FSD can't EVER be ported back to HW1?

Are you saying that the possibility is enough to prove they didn't cheat people?

If, some day, Tesla hypothetically says "we have determined it is technically possible to backport FSD, but we aren't gonna, because we like making money, not spending it" is that enough in your mind to demonstrate they didn't defraud anyone way back when?

Or do they actually have to make it run to fulfill the original promise?


The possibility is not enough. They have to make FSD work in older Teslas to fulfill the original promise, somehow.

That said, even if it doesn't happen this year, that doesn't mean it will never happen.

The first problem they need to solve is FSD. If they can't solve FSD even with latest hardware, then well, that would be bad on its own.

The question is, once they solve FSD, do they port it? Maybe; I don't think the port will be nearly as hard. Realistically, though, it's probably cheaper to give free upgrades to everyone and not have to do the port. To me that feels fair; it's like "hey, I promised you FSD on HW1, it's gonna be a pain to port, but here, take this free HW upgrade". I'd be OK with that.

Now if they don't compensate HW1 users to some degree once FSD is solved, I do feel that would not be cool.

But my point here is that, we are jumping to conclusions, we don't know yet. We can't claim (yet) that FSD will or will not happen.


Because the claim can't be backed by evidence. There's still a camera-only vs camera+lidar debate. But besides that, it's not hard to imagine that the current hardware isn't powerful enough to run the neural net needed to process whatever vision input you have, so you'd need next-generation GPUs/TPUs/whatever-Us. We just don't know until it's been done. But we do know that the more frames you can process per second, the better. And we know that the higher an image's resolution, the more information it carries and the higher a network's potential accuracy (if you're following ViTs, you'll notice people are upping the image resolution and getting better classification results).

So it's a bold claim given what we know. It's also clear that next-generation hardware will be safer regardless of whether the previous generation "works."


AI algorithms are split into training and inference.

Training is the process of generating the parameters of the neural network; it usually takes significant resources and is a very hard process.

Inference is the process of using a pre-trained neural network to perform predictions.

Tesla's cars have hardware to perform inference. Having worked in the field for several years, I do believe they have hardware capable of performing inference (driving a car).

However, the hardware that trains the neural network comprises hundreds, if not thousands or tens of thousands, of GPUs, and that hardware lives in Tesla's data centers (or at some other cloud provider).

The really hard part of this project is on training, not inference. So while there is some risk that the car (inference) hardware might require an upgrade, this seems very unlikely to me.
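A toy sketch of that split (the model and numbers are purely illustrative, nothing like Tesla's actual stack): training is a loop of expensive weight updates on data-center hardware; inference is a single cheap forward pass with the frozen weights.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    # Training (data-center side): many iterations of gradient updates.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Inference (in-car side): one forward pass with frozen weights.
    model.eval()
    with torch.no_grad():
        action = model(torch.randn(1, 8)).argmax(dim=1)
    print(action)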


Thanks? I'm a ML researcher. I thought the comment about ViTs would have been a dead giveaway. Is this nerd-splaining? Because it is really demeaning and you're missing the entire point.

While you're right that inference is not training, accuracy and inference speed are still important and hardware dependent. YOLO can still be made faster and more accurate. Everything I said still holds true. You want to do inference on larger images because larger images carry more information. More information helps you get better accuracy. You want bigger networks because bigger networks correlate strongly with higher accuracy. If you read any object detection paper you will see different-sized models showing these tradeoffs. You want faster hardware because you still want your algorithms to be fast. Inference is still done in the car. Better hardware still makes all of that perform better.
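To illustrate, here's a quick latency sketch (the model and input sizes are my own assumptions): the same pretrained network gets slower as the input image grows, so resolution, model size and hardware all trade off against inference speed.

    import time
    import torch
    from torchvision.models import resnet50

    model = resnet50().eval()  # randomly initialized weights are fine for timing
    for size in (224, 448, 896):
        x = torch.randn(1, 3, size, size)
        with torch.no_grad():
            model(x)  # warm-up pass
            t0 = time.perf_counter()
            for _ in range(5):
                model(x)
        ms = (time.perf_counter() - t0) / 5 * 1000
        print(f"{size}x{size}: {ms:.1f} ms per image")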

> The really hard part of this project is on training, not inference.

If you pay attention to the YOLO series or other real time object detection you will know that inference is still something being worked on. Inference speed matters. Throughput (different metric) matters. Hardware helps with both even if we use the same pretrained network. A network will have faster inference and higher throughput on an Ampere card than on a Kepler. So I'm not sure what you're debating here, it really feels like you're just trying to tell me you know machine learning but missed strong clues that you're talking to someone else who is aware.

TLDR: hardware still matters.


So much of the discussion seems focused on image recognition; how much of the full autonomous solution is the perception problem vs other elements of driving?

From the outside I've wondered whether the missed schedule targets around autonomous driving are because we've conflated "solving" the perception problem with solving self-driving. Just curious what your take is on that.


> I do believe that they have hardware capable of performing inference (driving a car).

Since it has never been done, we still don't know what's needed, so even if you've worked in the field that's only intuition, which is a bit light for big commercial claims IMHO.


You have no idea whether that's sufficient hardware for full self driving. Nobody does, because we don't know how to make it happen. We can't even prove that it's doable short of AGI.

So what's wrong with the statement is it fits a pattern of statements with reckless disregard for the truth.


Until it actually works, how can you say the hardware is sufficient?


Historically, the significant bottlenecks have been in the algorithms; I'd argue software plays a bigger role than hardware, especially during inference.

Going back to AlexNet: while GPUs helped the training process, it also included significant software improvements that made a huge difference: training with dropout, convolutional neural networks, rectified linear units, etc.
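For instance, those three ingredients look like this in a modern framework (a tiny illustrative block, not AlexNet itself):

    import torch.nn as nn

    block = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),  # convolutional layer
        nn.ReLU(),                                   # rectified linear unit
        nn.Dropout(p=0.5),                           # dropout regularization
    )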

Also, training hardware is much more relevant than inference hardware. Tesla's cars do not include hardware to train the neural network, only hardware for inference. The training hardware is actually a server cluster that Tesla owns. So if the hardware turns out to be insufficient, Tesla might have to buy a larger training cluster, but not necessarily upgrade the inference hardware.

That said, I do agree that there is some risk of requiring a hardware upgrade for inference, but to me that seems very unlikely.


so, you're saying it doesn't work then?


Correct! I'm saying FSD only works in 90%-99% of cases, which is currently insufficient to actually be considered FSD. Keyword: currently.

However, I'm NOT stating that this won't ever work. I do believe that FSD will work in the future. Are there risks? For sure. But I'm very optimistic.

I'm trying to communicate that even the current in-car hardware should be sufficient. Why? Because driving the car with a neural network is actually quite trivial; training the neural model is what's REALLY hard. But the training does not happen in the car; the car just collects data and sends it to a data center with GPUs that trains the network. If any hardware needs an upgrade, it's the data center where training happens, not the in-car hardware.

Are you really saying that FSD will never work? Cause that's a pretty bold claim.


The reasonable expectation is that cars will not be able to achieve actual self-driving without either AGI or more specialized hardware than a camera. AGI is almost certainly a 'next century' kind of technology. Adding sensors other than cameras (lidar, others) to a Tesla means that they don't have the hardware.

This claim is quite obviously true because, despite the claims of Tesla's people, we don't know of any vision system that works without relying on general intelligence. Human/animal vision in particular is only accurate because we inherently use our knowledge of the world and physical intuition to resolve ambiguity. Put a human (or other mammal, or a bird) in a fully artificial environment, without recognizable objects and without shadows, and you'll see that they are about as bad at vision and navigation tasks as many current algorithms. The effect will be even worse if you recreate realistic objects but with unrealistic proportions and behaviors.

Since we can't hope that computers will gain animal-level intelligence in understanding the world (no one is even really working on the necessary problems), the only path to self-driving is through more advanced sensors, as everyone but Tesla realized a long time ago. This is why Waymo has self-driving cars on the road today (in perfectly ideal conditions), while Teslas can sometimes barely navigate an empty intersection without a human driver intervening.


> I'm saying FSD only works on 90%-99% of cases

This is an extremely bold claim. You're also ignoring the fact that driving is an extremely heavy-tailed distribution. We know that processing heavy-tailed distributions is difficult (not the training, but the pre-processing), and we see a strong correlation between larger networks and performance on heavy-tailed distributions (more normal-like distributions can get away with substantially smaller networks). But larger networks mean more MACs, and more MACs mean slower inference. There's still a tradeoff between accuracy and speed, and in driving you need both.

> Are you really saying that FSD will never work?

No one here is saying that. Everyone here is calling you out on your BS that we _know_ that FSD will work on _current_ hardware. We don't _know_ that. Maybe it will, but it's not a pony I'm willing to put money on.


I'd add that we can be more specific about what a "case" is. A trip might describe it better: in 90-99% of trips, FSD Beta 9 will take you there without interventions. That's still lame, but quite impressive. Achieving 90-99% success on trips still seemed impossible one year ago; I don't think that's the case anymore. The remaining question is how fast this can improve from 99% to 99.9% and then to 99.99%, etc.
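As a back-of-the-envelope illustration (the numbers are my own assumptions, not from this thread), per-trip reliability compounds quickly, which is why each extra "nine" matters so much:

    # Chance of a whole year of trips with zero interventions,
    # assuming ~500 trips per year and independent trips.
    for per_trip_success in (0.90, 0.99, 0.999, 0.9999):
        p_clean_year = per_trip_success ** 500
        print(f"{per_trip_success:.4f} per trip -> "
              f"{p_clean_year * 100:.2f}% chance of an intervention-free year")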


“I’ve cured cancer, just got to finish the research. Am actively looking for investors. This is your chance to get in on the ground floor.”



