Some time ago (roughly 10 years) the presenter was internet famous as a Rubik's Cube speedsolver, making tutorials and videos about it: https://www.youtube.com/watch?v=609nhVzg-5Q
What a blast from the past! I learned how to solve the Rubik's Cube blindfolded by watching him, back in the day. His tutorials are perfect and I've probably recommended his channel to ~50 people myself. Crazy he only has 36.6k subs.
Glad to see he's doing well.
It's as if everybody wants to be the one to exclusively own the tech. Imagine every car manufacturer having a completely different take on what a car should be like from a safety perspective. We have standards bodies for a reason, and given that there are plenty of lives at stake here, maybe for once the monetary angle should take a back seat (pun intended) to safety, and a joint effort is called for. That would also stop people dying because operators of unsafe software try to make up for their late entry by 'moving fast and breaking things', where in this case the things are pedestrians, cyclists and other traffic participants who have no share in the monetary gain.
It would probably slow down. 9 women can't have a baby in 1 month. Besides that, the disagreements about approach, politics, or eventual competitive interests would probably bring things to a halt for a long time.
I don't think the solutions to this problem are resource-constrained. Many companies would happily find more resources in order to be first to market with this technology.
But just like the Melanesians with their coconut-shell headsets, there won't be anyone listening on the other end...
The Human Brain Project (HBP) should have tried a simpler task first, e.g. replicating a fruit fly's brain.
Replicate the rest of the fruit fly so that you can install the brain in it? And then replicate the world so that your software-simulated fruit fly has a natural context in which to operate?
Or just stimulate the brain with random inputs not associated with any real-world stimulus?
This already strains our technical capabilities (at least for the amount of money we are willing to spend on it).
For any animal whose behavior is the product of a lot of learning, and who deals with other such animals in daily life, you have to solve all the problems you had to solve for the fly, deal with a much larger connectome, and deal with the fact that, until you can passably simulate a mouse, you will never be able to simulate the way a mouse learns to behave in the presence of another mouse.
I don't think that's true. As soon as your simulated fly's behavior diverges from the actual fly's, all of the recorded input after that point is invalid/useless because it will not match the simulated fly's position/orientation/whatever.
Also, how many dollars of investment and years into the future do you think we are from being able to record all of a fly's neural input while it is moving freely?
Having a detailed simulation like this would greatly accelerate Numenta-style research, where instead of piecing together information from published papers, you would get it straight from experiments you control.
That would be a bad idea because, as in evolutionary processes, you need this diversity of ideas to locate better local optima, even if it takes longer.
Of course, it's not a binary choice. Things like data should probably be pooled but the use of data in tech should compete.
So actually, I think it's one of these situations where having a lot of independent efforts might be worth it.
In the self-driving world, the duplication is necessary - different companies are taking different directions, and nobody really knows which will work out.
In the ML hardware world, the duplication is mostly unnecessary. People are developing their own inference ASICs because they're relatively simple (compared to designing a CPU from scratch, designing a TPU is pretty simple because there are so few operations and no complex out-of-order execution), and you can't buy one off the shelf yet.
As soon as ML hardware becomes available to buy off the shelf without a massive price premium, everyone will switch to that.
So which one design do you suggest we all use for all our ML needs?
Also lookup tables for a bunch of activation functions.
That basic design can efficiently implement nearly any neural net architecture as long as the layer sizes are at least 128x128 and fixed point is okay.
The other exotic designs you suggested are more academic research things, and not yet deployed at scale in anyone's datacenters.
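To make the "basic design" above concrete, here's a toy Python sketch of that style of accelerator (my own illustration, not any vendor's actual pipeline): an int8 matrix multiply accumulated in int32, followed by an activation done via a 256-entry lookup table:

    import numpy as np

    def quantize(x, scale):
        # Map floats to int8 with a simple symmetric scale.
        return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

    # relu_lut[v + 128] == max(v, 0) for any int8 value v
    relu_lut = np.array([max(v, 0) for v in range(-128, 128)], dtype=np.int8)

    def dense_layer(x_q, w_q, out_scale):
        acc = x_q.astype(np.int32) @ w_q.astype(np.int32)  # the MAC array
        y_q = np.clip(np.round(acc * out_scale), -128, 127).astype(np.int32)
        return relu_lut[y_q + 128]                         # activation by LUT

    x = quantize(np.random.randn(1, 128), 0.05)    # 128-wide layer, batch 1
    w = quantize(np.random.randn(128, 128), 0.05)
    y = dense_layer(x, w, out_scale=0.01)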
What I've seen personally is what can be loosely termed "emergent consensus". Historical competitors (and often it gets whittled down to two giants, such as Boeing and Airbus) will work in secret on research, but after years of experimentation arrive at very similar outcomes: an optimal answer that could only be arrived at through constant trial and error, product evolution and iteration.
Regarding Karpathy's PyTorch presentation, I don't think anything that wasn't already public was revealed. The FSD board with custom NPUs is a work of art. I like that there are dual redundant streams. And the scale of the dataset is already well known: 4096 HD images per step!
If I had to speculate, the "Dojo Cluster" may be envisioned as an effort to share data and compute with industry partners as a cloud SaaS product and ancillary revenue stream. But that is pure speculation ;)
Inside Tesla’s Neural Processor In The FSD Chip
Of course, papers and data aren't code. But I think a lot more is being shared than people realize.
Right now, the competition in the self-driving area is metaphorically as close to do-or-die as you can have in peacetime. GM as a regular car manufacturer is toast; Cruise is pretty much their only hope. Uber is bleeding; if they pull self-driving off, they are kings. The German manufacturers are watching in disbelief as Tesla starts to eat their pie. Conversely, Tesla knows that in what they are doing (electric cars) they don't enjoy any fundamental moat. If the Germans get their act together, they'll be able to make equally performant electric cars that are also true luxury. The only one for whom it's not really about survival is Waymo.
Maybe the companies actually building the nuclear plants and the trains and rails were in competition?
Once that kind of secrecy was gone our whole technical progress was accelerated, because people could build on the discoveries of other people.
Right now we are going back to the alchemist model in some ways (the highest profile people work for the big companies and don’t share their discoveries). This makes progress slower.
I have to strongly disagree with this for the specific case of AI/ML. The big company labs are publishing open access papers non-stop, often with code and sometimes even datasets. They're more open than some areas of academia, in fact.
Interesting and certainly compelling, but not earth-shaking like style transfer has been.
Easier said than done, but I think it would strike the right balance between reducing duplicate work, and incentivising progress.
Which is a sign they were pretty worried about choosing to put all their smart guys on one path, and picking the wrong one. And the number of smart guys they had was certainly a real constraint. Even under their very stressed circumstances, exploration and competition were important.
Also self driving is not a problem where you can put a bunch of geniuses in a room and have them calculate the correct design. There are too many unknowns. It needs experimentation and trial and error.
Your logic of "if there were 10 of them simultaneously, it wouldn't have worked out" is flawed and self-serving: if you ignore all competition and just divide a single competitor into any number of smaller entities, of course at some point they'll be too small to be viable.
Replacing competition with cooperation may have accelerated the positive aspects of progress (e.g. science), and maybe slowed or prevented many of the negative ones.
Here's a treatise on the topic, if you're interested in this counterpoint: https://www.alfiekohn.org/contest/
Roads are also governed by public bodies. Road signs are standardized and public.
I think the government should take a much larger role in defining self-driving cars. For example, rather than using computer vision to recognize signs, signs could be active standardized beacons; instead of having to recognize lanes, lanes could be repainted with RFID chips that are trivial for cars to recognize and follow.
Avoiding driving into people is also something that was somewhat regulated by crosswalks with pedestrian lights. Would it be absurd for the crosswalk to know roughly how many people are at it, then broadcast this to the car, rather than having the car have to recognize them?
There are many things the government could do with transportation infrastructure that would benefit everyone, many of which are literally impossible for companies to do separately. Can you imagine if we had to wait until IBM (or Siemens or Google or Apple) got into the business of launching satellites before we got GPS? There is a good chance that to this day cell phones wouldn't know their location or give anyone any mapping applications.
To me, self driving cars are similar. Many parts of transportation are a public good.
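For what it's worth, a rough sketch of what a standardized beacon payload might look like is below; every name and field is invented for illustration, since no such standard exists:

    from dataclasses import dataclass
    from enum import Enum

    class SignType(Enum):        # hypothetical standardized sign types
        SPEED_LIMIT = 1
        STOP = 2
        CROSSWALK = 3

    @dataclass
    class SignBeacon:            # hypothetical broadcast payload
        sign_type: SignType
        latitude: float
        longitude: float
        value: int               # e.g. speed limit in km/h, 0 if unused
        pedestrian_count: int    # rough count at a crosswalk, per the idea above

    # A crosswalk broadcasting that roughly 3 people are waiting:
    msg = SignBeacon(SignType.CROSSWALK, 37.7749, -122.4194, 0, 3)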
Yes, completely absurd. For one, many places are too sparse, poor, or unstandardized for this to be remotely economical. Two, people often don't cross at crosswalks (jaywalking). Level 5 AV is a thing where 99% coverage isn't good enough. You need a lot of nines.
That's why many think lidar systems are a crutch. If lidar can't work in snow or heavy rain (at least 1-2% of days in the north), then you need a fallback which must still be >99% as effective to avoid incidents. But then why not just use the fallback?
Generating data? Sure. But for actual use in the data pipeline? Makes you rely on an ultimately untenable solution.
Anyway it's just one example. RFID or similar embedded beacons in the road paint would make roads much easier for cars to follow. They are expensive for any one company to deploy but cheap for the government if done everywhere at once, lasting 5-10 years. Road signs that broadcast what they are (instead of needing to be "seen" and interpreted visually rather than through radio) are similar.
Finally, a government standard could coordinate cars into a caravan, avoiding pileups for example, and giving the participating cars several advantages you can find by Googling "car caravan". (Though this might be achieved by industry.)
I can see the case for doing their own edge hardware for the cars (barely), but I really don't think doing training hardware will pay off for them. If they're serious about it, they should spin it out as a separate business to spread the development cost over a larger customer base.
Also, I'm really curious whether the custom hardware in the cars is benefiting them at all yet. Every feature they've released so far works fine on the previous generation hardware with 1/10 the compute power. At some point won't they need to start training radically larger networks to take advantage of all that untapped compute power?
It's not surprising that they also build the hardware for training. Correct me if I'm wrong, but Google uses the same TPUs for training and inference, because the underlying operations are the same: multiply then add numbers. Once Tesla built the hardware for inference, the design of the hardware for training is probably similar.
Unlike Google's TPUs, Tesla has a specific use case for the hardware (computer vision for automotive), and maybe that means they can further optimize the computation pipeline with their own specialized hardware.
- It's under 100W so they can retrofit into old cars
- lower part cost, so they can do full redundancy by doubling the parts
- they estimated that 50 TOPS is needed for self-driving
- lower latency with batch size of 1 compared to TPU's 256
+ GPU for post-processing
- security: only code signed by Tesla can run on the chip
- at the time (2016) there were no neural net accelerator chips
- some parts are built from bought IP (so not reinventing them), probably things like the 12 ARM CPUs, the LPDDR4 memory, the video encoder, and maybe the separate post-processing GPU too
- physical size of the board is small
- performance example: on CPU 1.5 FPS, on GPU (600 GFLOPS) 17 FPS, on Tesla's NN accelerator 2100 FPS
- besides the convolutions, even ReLU and pooling are implemented in hardware
- Paying attention to the energy efficiency down to the arithmetic and data type usage.
- The silicon cost is less than their previous hardware (HW 2.5)
- old hardware 110 FPS new one 2300 FPS
- 144 TOPS compared to NVidia's Drive Xavier 21 TOPS
In other words, Cloud TPUs could be the same architecture as Edge TPUs but scaled to a higher frequency and packed more densely.
I guess we need sources to confirm.
Also, training requires a lot more RAM per unit of compute, since it needs to store all past layer activations, whereas for inference, that is unnecessary.
As far as I know, no player who has developed dedicated ML hardware (as opposed to using GPUs) uses the same hardware for both inference and training.
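A minimal PyTorch illustration of the memory point above: during training, autograd keeps every intermediate activation alive for the backward pass, while inference under no_grad() can drop them as soon as the next layer has consumed them:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    x = torch.randn(64, 512)

    # Training: autograd records the graph and retains activations
    # until backward() has used them.
    loss = model(x).sum()
    loss.backward()

    # Inference: no_grad() skips the recording, so activation memory
    # can be freed (or reused) layer by layer.
    with torch.no_grad():
        y = model(x)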
I would expect their training hardware to be something specifically aimed at optimizing memory bandwidth to support distributed training of their "shared" hydra feature extractor. It's interesting that the shared hydra feature extractor is able to converge as they keep adding more and more output predictions, under a training regime of interleaving asynchronous updates to the model from different predictor networks ...
Seems to me the formula they are pursuing with custom hardware might be to support a strategy of
1. keep adding more predictions based on same feature
2. Increase the span of time represented by batches used to train the recurrent networks
Both pursuits seem very data efficient in terms of the amount of training data they could conceivably collect per unit time of observation ...
Custom hardware with a problem specific memory architecture aimed at efficiently supporting training with very large rnn time slices could be developed that’s more about “make it possible to train this proposed model at all” rather than “make it faster/cheaper to train existing common model architectures”. When custom hardware is required to make it possible to train the model they want, the validity of the hardware development cost bet might end up being more about the effectiveness of the model they think they want than it is about maintaining general purpose performance parity vs any off the shelf hardware options ...
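For illustration, here's a rough PyTorch sketch of that interleaved regime; the head names and sizes are made up. One shared trunk, several predictor heads, and each step updating the trunk through just one head:

    import torch
    import torch.nn as nn

    # One shared trunk ("hydra"), several predictor heads (names invented).
    backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten())
    heads = {"lanes": nn.Linear(32, 4),
             "objects": nn.Linear(32, 10),
             "signs": nn.Linear(32, 6)}
    optims = {name: torch.optim.Adam(list(backbone.parameters()) +
                                     list(head.parameters()))
              for name, head in heads.items()}

    def train_step(name, images, targets):
        # Each step pushes gradients through the shared trunk via one head;
        # successive steps for different heads interleave their updates.
        optims[name].zero_grad()
        loss = nn.functional.mse_loss(heads[name](backbone(images)), targets)
        loss.backward()
        optims[name].step()

    train_step("lanes", torch.randn(8, 3, 64, 64), torch.randn(8, 4))
    train_step("objects", torch.randn(8, 3, 64, 64), torch.randn(8, 10))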
Not agreeing or disagreeing with their decisions, but if you have the resources, you can certainly design a custom chip that performs a specific type of task very well and beats other competitors. Nvidia's GPUs have to be reasonably good at training across different NNs. You could have a chip that's exceptionally good at training one or two specific types of tasks.
For most companies, this would be a bad idea. However, Tesla knows how to produce hardware.
Not sure if it’s still the case today, but previously Tesla’s training was done on-prem and with their own in-house Tensorflow clone.
And yes, if you get TPUs from GCloud, you are likely to be working with their engineers to get things working. Those engineers tend not to have much business conflict of interest, though. They want to help you because your problems are likely more interesting than what they’d otherwise be assigned.
By hardware-hours, Tesla is hardly one of the top companies training deep networks; Planet Labs (satellite imaging), Netflix and Pornhub are likely bigger, to name a few.
What's the info they'd leak to the Google consultants? How much data or TPUs they're using? This is practically public information.
Here's another article on the subject:
Characteristics that make content successful... here again, it's mostly decent-quality content plus marketing. I doubt the scripts are reviewed by NNs.
I have no idea about the network and delivery stuff, but I guess a well designed network can take care of most of it.
My strong impression is that, similar to other well known cases (Uber, WeWork) Netflix is a mostly traditional company (a media company) that very strongly wants to be seen as a tech company.
It's a bit odd that legally one can rent out physical disks, but there is no corresponding way to legally get permission to rent out streaming content without negotiating with the rightsholder. But that's how it is...
It's hard to think of a good fair regime to do this under.
At least with a physical disk, there's a maximum reasonable rate that you can turn the disk around between users and you need to have enough copies for whatever the lifecycle peak demand is.
We do have an audio compulsory licensing system for things that are purely songs. But with video works, there's not a clear boundary for "how big" the work is-- how do you treat 30 hour anime series vs a 5 minute Pixar short? How do you treat continuing medical education videos vs. fluff amateur made content? Etc.
Also, Google TPU TOS prohibits the use of TPUs for stuff that competes with Google (and I'm assuming with other companies under Alphabet umbrella), at Google's sole determination. Not that it would be a good idea to upload Tesla's proprietary data into Google Cloud even if it did not. Cloud, after all, is just somebody else's computer.
I don't think this is true. If you're talking about https://news.ycombinator.com/item?id=19855099, it doesn't apply to TPU hardware, as is explained in the comments there.
> TPU-like stuff is ~10x the energy efficiency of GPUs
10x is probably overstating it when talking about newer GPUs because they have ML hardware in them now. Also, that still doesn't make it a good idea to build your own chips because there will soon be many third party options to choose from. Doing your own chips is a bet that you will out execute dozens of companies ranging from startups to industry giants. Simply taking your pick of the best commercially available options is likely to be a better choice in the near future.
As for single-source, the models are written in PyTorch, not TPU machine code, and they're pretty standard models anyway (e.g. Resnet-50). They can easily transfer to other hardware if necessary. There's not a ton of lock-in there. It doesn't justify the massive costs of ASIC development just to avoid this nonexistent lock-in and imaginary TOS clause.
GPU advantage: a more refined ecosystem, and you can buy them for under $1000 or get laptops with them built in; and if NVDA has sweated more software engineering blood and tears than GOOG into your model's functions, it will run better on them.
TPU advantage: Colab has a free tier that lets you play with them at no charge, and if GOOG has sweated more software engineering blood and tears into your model's functions, it will run better on them.
All IMO of course. And deep down it can get more complicated than that, but I salute GOOG for being the first company to ship competitive AI HW, doubly so at scale.
TPU? Seems like it has a lot of potential, but not for people directly competing with them.
Waymo's Laserbear lidar? Seems like it has a lot of potential, but not for AV companies directly competing with them.
Google's playing this game pretty fiercely... which given their size is pretty bad/daunting.
Could you share the stats on this? Google told me to use a K80 for training.
Can it be true? Then again, Apple's app store behaviour seems to suggest such demands are tolerated. Antitrust is really asleep in the US, isn't it?
Also, vendor lock-in is a huge challenge in the cloud space. I don’t think Tesla would be comfortable with the fact that all their training data sits on a potential competitor’s datacenter.
Also, Tesla is/was planning on running these chips even while not in autonomous mode, in order to spot new scenarios: the car records what it sees and what the human does, collecting many more unique road scenarios to train their models with.
 Credit Robotbeat: https://news.ycombinator.com/item?id=19729743
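Loosely, such a "shadow mode" might look like the sketch below; the threshold and names are invented for illustration:

    import torch

    STEERING_DELTA = 0.5  # invented threshold, e.g. radians of disagreement

    def shadow_step(model, frames, human_steering):
        # The net "drives" silently; big disagreements with the human
        # are flagged as interesting scenarios to record.
        with torch.no_grad():
            predicted = model(frames)
        if (predicted - human_steering).abs().max() > STEERING_DELTA:
            return {"frames": frames,
                    "human": human_steering,
                    "predicted": predicted}
        return None  # nothing unusual, nothing recorded

    # e.g. with a stand-in "model":
    flagged = shadow_step(lambda f: f.mean(dim=(1, 2, 3)),
                          torch.randn(1, 3, 64, 64), torch.tensor([0.1]))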
Increasing their compute per watt lets them fit more compute into the same power budget, so they will pursue it hard. Low-speed energy savings are a way of making the range extension seem like a big deal when it isn't (maintaining highway speeds requires tens of kilowatts).
Power efficiency does make a range difference and is worth seeking, but Tesla is exaggerating this, IMO, to conceal one of the ways that the churn-heavy cycle for FSD has imposed organizational costs (an entire chip-level hardware program, in this case).
An actual working system might slow them down, but even then, better hardware is one of the ways to make it better.
There's commodity computing of comparable throughput that other automakers can incorporate without too much cost, but it doesn't fit into the existing TM3 vehicles because it has a larger footprint in power, space, and cooling than HW2.
Here, Tesla spends a bunch of money to develop HW3 and overcome this problem, but the advantage in a new car design between HW3's efficiency and commodity hardware is limited.
The latest OTA finally brings a hardware v3 only feature, traffic cone visualization, and traffic cone automatic lane change.
Looks like they are really nicely orchestrating workloads and training on numerous nets asynchronously.
As a person in the AV industry I think Tesla's ability to control the entire stack is great for Tesla... maybe not for everyone who can't afford/doesn't have a Tesla.
Affordability is not as much of an issue as some make it out to be. Cost-wise it's like owning a Camry or an Accord, if you go for the lower end models. If you mean not everyone can afford a new car, then sure I agree with you.
Edit: if you think I'm wrong about this, please explain or ask me to clarify anything.
Similarly, I believe the infrastructure around electric charging stations hasn't fully matured yet, and as a result many people who already own a car will stick with gas cars, since there's no huge incentive to change unless it becomes easier to charge (faster, more convenient).
Do note that I don't have a driver's license. I never intend on getting one (I believe in what I do in the AV industry). I'm just guessing about the habits of people; I don't have any real experience buying a gas/electric car.
Also note I didn't think you were wrong, not sure why the downvotes.
That's interesting that you relate something you do in the AV industry to not ever getting a car... what's that about?
I do think that it's possible in the future the majority of people will never need to own a car.
Regardless, I think Ghost Locomotion and Comma.ai have a lot of potential for what they're doing now. I think they'll coexist with fully driverless cars like Cruise, Waymo, or Aurora, regardless of whether they're electric or not (I think electric cars will be more heavily adopted if we use Cars as a Service).
Also, yes, I don't ever intend to get a license. This problem is really fascinating and I'm excited to play a small part in it. By not having a license, I can spend more time with the perspective of a person 20-30 years from now, when driver's licenses will be less common, and apply it in my work.
This is awesome. I've always found that only by really experiencing the future (or some part of the leading edge that is soon going to become the future for most people) do you gain a much deeper understanding of it, ahead of others, and can apply it in your life planning. It would be worth living this way even if only temporarily, just to get that perspective, as you say... Not ready to give up driving forever, heh, but I might have to try a short sample of that non-driving lifestyle sometime.
The smart solution would be to consider a map a probabilistic thing, which neural networks are really good at handling.
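A toy example of what "a map as a probabilistic thing" could mean: treat the map as a prior and the perception stack as a likelihood, so a stale map degrades belief gracefully instead of being trusted absolutely. All numbers here are made up:

    # Map prior vs. current perception, combined with Bayes' rule.
    def lane_posterior(map_prior, detected, tpr=0.9, fpr=0.2):
        if detected:        # detector says "lane present"
            num = tpr * map_prior
            den = num + fpr * (1 - map_prior)
        else:               # detector says "no lane"
            num = (1 - tpr) * map_prior
            den = num + (1 - fpr) * (1 - map_prior)
        return num / den

    # A stale map (prior 0.95) vs. a detector that no longer sees the lane:
    print(lane_posterior(0.95, False))  # ~0.70: belief degrades, doesn't snap to 0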
Lidars are cheap, but not yet cheap enough to avoid seriously affecting the bottom line if you put them into every car. It would also kill the resale price of cars without it, which in turn hurts the company's image and stock price.
It all depends what you see are the chances of the autonomous taxi market developing within the lifetime of these cars.
>“Lidar is a fool’s errand,” Elon Musk said. “Anyone relying on lidar is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.” https://techcrunch.com/2019/04/22/anyone-relying-on-lidar-is...
To me it seems we're still in really early days.
And later in the video they show 3D reconstruction from cameras and say they use it in the car.
Watching the full talk is recommended if you have the time (talk starts around 1:10:00 in the video)
He describes the sub-graph training in the context that they have all the predictors in one big model, and with control of the network they can feed forward and train a sub-graph (read: sub-parts) of the model.
They talk about the concept of "modular network". The article itself links to the Wikipedia page : https://en.wikipedia.org/wiki/Modular_neural_network
Not sure it's exactly the same idea, but it looks similar.
- TensorFlow is great at deployment, but not the easiest to code. PyTorch wasn't frequently used in production until recently.
- If you have the resources for great AI engineers and researchers, your team will be good enough to build and deploy with either framework.
- Preference goes toward whichever framework your tech leads find easier.
- Lots of new academic research is coming in PyTorch
- TensorFlow is undergoing a massive change from 1.1x to 2.0; if you choose TensorFlow, do you write for 1.1x and then refactor to TF 2.0? Or write for TF 2.0 now and deal with all the new edge cases? Or write in PyTorch (easier) but handle the more difficult deployment process?
- ML code quickly rots. Bad PyTorch code is just bad Python code. Bad TensorFlow code can be a nightmare to debug.
- PyTorch's eager execution makes NNs much easier to prototype and build (quick illustration below).
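The eager-execution point in the last bullet, in a nutshell: every operation runs the moment it's called, so debugging is just Python:

    import torch

    # No session, no compiled graph; tensors are inspectable mid-model
    # with plain print() or a debugger.
    x = torch.randn(3, requires_grad=True)
    y = (x * 2).sum()
    print(y)        # a concrete value, right now
    y.backward()
    print(x.grad)   # gradients are just another tensor: tensor([2., 2., 2.])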
He is an excellent presenter who really has a passion for teaching.
I'm not really involved with the industry, so I can't really speak to how he holds up against other experts. However, he is by far the most digestible resource I have found for learning about NNs and the science behind them.
If you are just discovering him now, google his name and just start reading. His work is truly binge worthy in the most meaningful way.
Lidar helps you spot obstructions, but won’t tell you what they are and won’t help you figure out what to do to avoid them.
Want an example? Cruise’s first real world demo got stuck behind a simple taco truck in downtown SF.
Tesla is already using a model to rebuild a 3D space from camera data only; the parent suggests improving the quality of that transformation with high-quality 3D representations from lidar.
It's deep learning all the way down.
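A hedged sketch of that idea, assuming a made-up architecture (this is not Tesla's actual model): supervise a camera-only depth network with sparse lidar returns, then ship without the lidar:

    import torch
    import torch.nn as nn

    # Placeholder architecture for illustration only.
    depth_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 1, 3, padding=1))
    opt = torch.optim.Adam(depth_net.parameters())

    def train_step(image, lidar_depth, valid_mask):
        # Lidar returns are sparse, so the loss only covers pixels
        # where there was a return (valid_mask == 1).
        pred = depth_net(image)
        loss = ((pred - lidar_depth).abs() * valid_mask).sum() / valid_mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    img = torch.randn(1, 3, 64, 64)
    depth = torch.rand(1, 1, 64, 64) * 80               # fake depths in meters
    mask = (torch.rand(1, 1, 64, 64) > 0.9).float()     # ~10% lidar coverage
    train_step(img, depth, mask)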
Your comment just reeks of anger and hostility.
It seems like you'd rather Tesla didn't try at all, and instead we all just give up and go back to the status quo.
Waymo had a 20 year advantage, but Google lost many key people there in the meantime as Larry Page didn't want to launch partial self driving.
I think both approaches are great and I wouldn't want to choose between the 2, just be a happy user of the end result of the competition.
any source for that?
That's the responsible thing to do.
But at the same time people understand that on a highway Tesla Autopilot is safe enough to be used on a long boring road, and drivers generally feel less tired (and can focus more on the harder parts of the road).
In all seriousness, I like Karpathy and his work. Seems like a good guy. Not sure why you'd want to work to enrich a guy like Elon Musk though.
If you look at the alternative (Dieselgate, and the air pollution that kills millions of people every year; sadly I'm highly affected by it, and trying not to go near any polluted area is what my life is about at age 37), I'm happy for Elon Musk and I hope Tesla's mission will succeed as soon as possible.
How can we rely on the output of eight cameras? This is not a kid's science project.
It's all fancy neural networks until someone dies. Pretty callous and Silicon valley-mindset for such an important and critical function of the car.
Will never buy a Tesla after having seen this.
It's also mind boggling to think we currently trust organic tissue to do this crap, some of which is bathed in psychoactive chemicals.
And yet we do, and as a result, horrendous catastrophes occur every minute of every day.
> It's all fancy neural networks until someone dies
No, that can't be the standard, not when people are dying right now in the current regime.
Unless the new regime kills and maims at a higher rate than the current regime, there is no reason for fear.
The perception will be different however, since in the public eye the drunk driver “had it coming” and the only real loss is the other driver/passengers/pedestrians. If a self-driving car kills its own occupants and the occupants of another car then there are more “innocents” than in the drunk-drive scenario.
Current averages for the US are around 10 per billion miles and decreasing; best countries in Europe are already below 5 per billion miles.
There's some really interesting progress both on vehicles monitoring driver attention, and on monitoring for alcohol in the air, that would yield substantial improvements even beyond 2019 tech. I have no doubt we'll see fatality rates below 1 per billion miles in Western Europe within a decade.
The corollary of the statistics quoted above is that you need to observe your self-driving vehicle system over tens of billions of miles before you even know if you're safer or not.
What I'm saying is, the averages today are across all vehicles on the road, some of which are old and have fairly low survival probability in an accident. And the deployment of technology to handle/avoid distracted driving is still in its infancy. If we just wait 10-15 years, I believe the safety rates for humans will increase by 10x over current US statistics.
Then self-driving has to manage another 10x improvement over that, in order to be worth it.
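A quick back-of-the-envelope for why it takes tens of billions of miles, using the ~10 fatalities per billion miles figure from above and simple Poisson statistics:

    import math

    rate = 10 / 1e9  # ~10 fatalities per billion miles, from above

    for miles in (1e9, 1e10, 1e11):
        expected = rate * miles                 # Poisson-distributed count
        rel_err = 1 / math.sqrt(expected)       # 1-sigma relative uncertainty
        print(f"{miles:.0e} miles: ~{expected:.0f} deaths, "
              f"rate known to +/-{100 * rel_err:.0f}%")

    # 1e9  miles: ~10 deaths,   +/-32% -- can't yet tell a 2x improvement from noise
    # 1e10 miles: ~100 deaths,  +/-10% -- now a 2x improvement is measurable
    # 1e11 miles: ~1000 deaths, +/-3%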
The appropriate standard for self-driving cars is a sober professional driver, not the average idiot on Saturday night. It isn't unusual for a driver to go a million miles without an accident, when that's their job.
Numbers are numbers.
I've yet to see a computer just stay up that long, never mind actually doing anything the whole time.
(If you use the collision rate instead, the numbers are about 500 times worse - but that's still quite a lot of 9's.)
People have this fixed thought that humans are terrible drivers. They are not! Computers have their work cut out for them to exceed the reliability record humans have set.
Silly example of why: humans are zero nines of reliability if you talk about crashes per parsec.
But that's not what I measured. I measured safe minutes of driving as a ratio to unsafe minutes. Which is a unitless number.
For 7 9's I assumed a crash had a 5 minute lead-in of unsafe driving before the actual crash, and that average driving speed was 30 mph.
If you assume the bad driving is 30 seconds (for example, in the accident in Tempe the Uber car saw the pedestrian around 30 seconds before crashing), then you can add another 9, making both figures 8 9's of reliability.
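For anyone who wants to check the arithmetic, here it is as a few lines of Python (same assumptions as above):

    import math

    fatalities_per_mile = 1 / 100e6   # ~10 per billion miles
    avg_speed_mph = 30

    def nines(unsafe_minutes_per_crash):
        minutes_per_crash = (1 / fatalities_per_mile) / avg_speed_mph * 60
        return -math.log10(unsafe_minutes_per_crash / minutes_per_crash)

    print(nines(5.0))   # ~7.6 -> the "7 9's" figure (5-minute lead-in)
    print(nines(0.5))   # ~8.6 -> the "8 9's" figure (30-second lead-in)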
FWIW, I think deaths, regardless of fault, is probably the best number for an apples-to-apples comparison. If the Uber car were driven by a person without a dash-cam, then there is no way they would have been ruled at fault.
Similarly, I have been involved in 5 collisions (most were not my fault). Of the 5, 3 were reported to insurance and only 1 was recorded by the police. (For 1 other the police were called, but the dispatcher on the non-emergency police line said "Are both cars driveable? (Yes.) Is anyone injured? (No.) Then don't bother us!") By comparison, in California even a single-car collision on private property must, in theory, be reported to the police.
Deaths, at least, will get recorded by the CDC, if nobody else.
However, I don't think that deaths are necessarily the best number for considering "failures": all deaths are in some way a failure, but most failures do not result in deaths. Nobody even had to see a doctor for the 5 collisions I mention. I've fallen asleep at the wheel and crossed the center line without getting in an accident, &c. Humans make a lot of mistakes, but most of them do not result in a collision, and most of the ones that do result in a collision do not result in a death.
I also think it's inevitable that computers will become better at driving than not just the mean or median driver, but 90th percentile or more, at which point it would be immoral to not, in some way, encourage self-driving cars over human-driven cars. My guess is that we are less than a decade away from that point, but I could be wrong.
Certainly the lack of a safety culture in the automotive field (as compared to, say, avionics) is a huge barrier to be overcome, and that's a social hurdle, not a technical one. I can only make wild-ass guesses as to when that will change.
You seem to be implying that cab drivers are the only people who ride in cabs.
However, the problem is not the eight cameras. You should be able to drive fine with just one camera (thought experiment: could you drive a car 1000 miles away from you, just by seeing what that car's driver would see, with no extra cameras, sensors or lidars?).
And that took many decades not even years and in a very constrained problem space.
Releasing a "self driving" metal torpedo even if it's a limited Summons should at least be regulated if not outright banned.
I feel like even a single accident let alone death should put these idiots behind bars.
Please explain, in detail, what your specific objections are, and how you are more qualified on this subject matter than the presenter.
It's often reactionary - people wait for someone to die and then suddenly you have nervous Musk in front of Congress and such crap. Why wait? Why can't these be regulated before allowing on roads?
And no, failover control is not acceptable given the past incidents and deaths.
Today thousands are dying every year on the road; are we going to just forbid cars entirely?
> And no, failover control is not acceptable given the past incidents and deaths.
How many incidents/deaths per miles driven? How does it compare to all the other transportation systems?
Professional drivers can go for a million miles without an accident, and I don't believe anyone's autonomous driving software can get within an order of magnitude of that without a disengagement, even in favorable conditions.
You may disagree that a disengagement is equivalent to a human having an accident, but I strongly feel that it is. In either case, you have a situation where the driver reached a point where it was definitively unable to determine an appropriate next action.
“Perhaps that’s because, as it turns out, Teslas on Autopilot could have a higher fatal accident rate than those driven entirely by humans.”
They still have backup drivers mostly though.
Tesla makes this very clear.
It's also all over the Internet.
But there will always be people who do not get this.
Those people should not drive Teslas, or pretty much any modern car for that matter.
If you are under the impression that you would be relying on this system to drive the car, then I agree you should not get a Tesla.
Of course full self driving is coming at some point, but that's a conversation for another day. Meantime Tesla is making steps toward it very incrementally with things like the "stop mode" rolling out right now.
- the driver is still present and will most of the time intervene and save their life (there are some YouTube videos where the Tesla AI was trying to kill the driver, and those incidents don't appear as accidents in the stats)
- "autopilot" is engaged mostly on highways, but the statistics do not account for this
- they are also comparing the safety of a new car against the median of all cars (old and new, cheap or expensive) across all demographics (how many teens own a Tesla?)
I would like Tesla to make all the data open, or have an independent group analyze it, including the disengagements. I wonder how isolated the cases are where the car was driving you into trucks or stone walls but the drivers intervened and saved themselves, also saving Tesla from a bad statistic.
Autopilot has the luxury of being able to / requiring disengagement in less-than-favorable driving conditions. Humans don't.
A fair comparison would be comparing against the miles/conditions/roads driven.
Watching the AI visualization of summons in-action was horrifying, and made clear why many have reported summons mode as resembling a drunk person navigating a parking lot.
I also think the way Tesla is going about this is utterly idiotic and reckless. It upsets me that I'm sharing roads with vehicles having these dysfunctional immature systems.
Point this crap at video game engines and don't let it anywhere near real people until it can drive millions of virtual miles in something like GTA without hitting anyone/anything and without behaving like a drunk driver who lost their glasses.
If bad human drivers killed only themselves, that would be one thing, plenty of Darwin awards to go around. But they don't. I do value reducing traffic fatalities to near zero. I think that would be fantastic.
The GTA idea is a good one. I think they've done that, many times over, but I could be wrong.
The way Tesla is doing it is interesting. Have human supervision, so even though the system may behave, as you say, like a drunk driver who lost their glasses, at least the human is there keeping it in check.
Humans aren't perfect, but I've found (and statistics have shown) that the system plus a human is better than a human alone. That alone should make you less upset. These cars are less reckless than cars driven by humans.
Human supervision is just one element of the very smart way Tesla is going about this. The other element is incremental changes. A lot can be said about that but I won't now. Teslas are getting small and big safety improvements that roll out over time as full self driving gets closer and closer. This should make you feel better.
If you see a Tesla in autopilot, it won't be tailgating. If you see people tailgating in a Tesla, you can be sure autopilot is not turned on. Even with the follow distance set to the lowest, the distance kept is larger than what is done by most humans. This should also make you less upset.
When driving in stop and go traffic with Teslas with autopilot on, if the owner sets the follow distance to say the middle range, you'll find you are often able to merge in in front of them, because they allow space for that (and for safety). This should help too.
Over time, the system is getting better and better. Eventually, it will all be ok!