





I would never trust any self driving car that didn't use LiDAR. It's an essential sensor for helping to fix issues like this:

https://www.youtube.com/watch?v=1cSw4fXYqWI&feature=emb_logo

And it's not contrived, since we've seen situations of Tesla Autopilot behaving weirdly when it sees people on the sides of billboards, trucks, etc.


LIDAR vs camera is a red herring. The fact that Elon and his fan club fixate on this shows you how little they understand about self driving. The fundamental problem is that there is no technology that can provide the level of reasoning that is necessary for self driving.

Andrej Karpathy's most recent presentation showed how his team trained a custom detector for stop signs with "Except right turn" text underneath them [0]. How are they going to scale that to a system that understands any text sign in any human language? The answer is that they're not even trying, which tells you that Tesla is not building a self-driving system.

[0] https://youtu.be/hx7BXih7zx8?t=753
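
To make the scaling problem concrete, a special-cased modifier detector ends up looking something like the toy sketch below (the modifier list and all names are my invention, not Tesla's architecture). Every new sign variant means a new labeled class and more training data:

  import torch
  import torch.nn as nn

  # Hypothetical sketch, not Tesla's actual design: a special-cased head
  # that classifies known stop-sign modifiers from a backbone embedding.
  # Every new modifier needs its own labeled data and its own output.
  MODIFIERS = ["except_right_turn", "no_turn_on_red", "school_days_only"]

  class StopSignModifierHead(nn.Module):
      def __init__(self, backbone_dim=512):
          super().__init__()
          self.fc = nn.Linear(backbone_dim, len(MODIFIERS))

      def forward(self, sign_embedding):
          # sign_embedding: (batch, backbone_dim) features of a detected sign
          return torch.sigmoid(self.fc(sign_embedding))  # per-modifier probability

  head = StopSignModifierHead()
  probs = head(torch.randn(1, 512))  # probs[0, 0] = p("except right turn")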


A surprising number of human drivers would also not be able to 'detect' that 'except right turn' sign. Only three states offer driver's license exams exclusively in English; California, for example, offers the exam in 32 different languages.

Even so, it is quite possible to train for this in general. Some human drivers will notice the sign and will override Autopilot when it attempts to stop, which triggers a training-data upload to Tesla. Even if the neural net does not 'understand' the words on the sign, it will learn that a stop is not necessary when that sign appears in conjunction with a stop sign.
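
Roughly, that trigger could look like the sketch below; the names and the condition are invented, since the actual fleet pipeline isn't public:

  # Toy sketch of override-triggered data collection; all names invented.
  upload_queue = []

  def on_driver_override(planned_action, driver_action, camera_clip):
      """Queue clips where the human disagreed with the planner."""
      if planned_action == "stop" and driver_action == "proceed":
          upload_queue.append(camera_clip)  # candidate training example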


They have hired much of the industry's talent, so I think it's quite silly to claim they understand little about this. In my opinion, nobody outside Tesla and Waymo has more knowledge of this field.

Why does it need to work in any human language? It isn't as if self driving cars need to work on Zulu road signs before they can be rolled out in California. I'd be surprised if they ever needed to train it on more than 4 languages per country they wanted to roll out to.

If I were driving I'd definitely stop for the person in the road projection at https://youtu.be/1cSw4fXYqWI?t=85

LiDAR also isn't a silver bullet. Similar attacks are possible, such as simply shining a bright light to overwhelm the sensor, as well as more advanced attacks like spoofing an adversarial signal.


I don't think it's attacks we need to worry about (there's even an XKCD about dropping rocks off of overpasses). The issue is that without good depth and velocity data (so probably LiDAR) there are lots of fairly common situations that an ML algorithm is likely to have trouble making sense of.

I use autopilot every day. It stops for stoplights and stop signs now.

Sometimes it also stops when on the freeway behind a construction truck with flashing lights.

That is misleading. Driving on the highway is by far the easiest part of self-driving.

Going from 3 nines of safety to 7 nines is going to be the real challenge.


There aren't stoplights on the highway. I'm talking about in-city driving.

Humans don’t need LiDAR to recognize billboards

Self driving cars can't rapidly move their cameras in multiple spatial directions like humans do on a continuous basis.

Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.


People don't have eyes in the back of their heads. Self-driving cars don't get drunk or distracted by cell phones. Comparing humans with AVs is apples and oranges. The only meaningful comparison is on output metrics such as accidents and fatalities per mile driven. I'd be receptive to conditioning this metric on the weather, so long as the AV can detect adverse conditions and force a human to take control.
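
For concreteness, the metric itself is trivial to compute. The figures below are approximate 2019 US-wide numbers for human drivers; an AV fleet would need enough miles for its own rate to be statistically meaningful:

  # Approximate 2019 US figures for human drivers; illustrative only.
  def fatalities_per_100m_miles(fatalities, miles):
      return fatalities / miles * 100_000_000

  print(fatalities_per_100m_miles(36_000, 3_260_000_000_000))  # ~1.1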

Chimps have us beat when it comes to short-term visual memory (Humans can't even come close).

Mantis shrimp have us beat when it comes to quickly detecting colors, since they have twelve photoreceptor classes vs. our three.

Insects have us beat when it comes to anything in the UV spectrum (we're completely blind to it). Many insects also cannot move their eyes but still have to use vision for collision detection and navigation.

Birds have us beat when it comes to visual acuity. Most of them also do not move their eyeballs the way we do, but they still have excellent visual navigation skills.


Humans have visual processing which converts the signals from our three types of cones into tens to hundreds of millions of shades of color. Mantis shrimp don't have this processing. Mantis shrimp can only see 12 shades.

Human color detection is about six orders of magnitude greater than mantis shrimp's.
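
Back-of-envelope for "six orders of magnitude", using the common ~10 million estimate of colors humans can distinguish:

  import math

  human_shades = 10_000_000  # common estimate of human-distinguishable colors
  shrimp_shades = 12         # one per photoreceptor class

  print(math.log10(human_shades / shrimp_shades))  # ~5.9, i.e. about six orders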


Right, but the theory is that they have us beat when it comes to speed since they are directly sensing the colors whereas we are doing a bunch of post-processing.

I think the point was that brains are the best pattern and object detection computers, not necessarily just human brains.

> Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.

Not defending those who say that LIDAR isn't useful/important in self-driving cars, but this assertion is only marginally true today and won't be true at all for much longer. See https://arxiv.org/pdf/1706.06969 (2017), for instance.


Humans have about 2° field of sharp vision. Computers with wide angle lenses don't have to oscillate like the eyes do.

Humans are underrated.

On driving? I would posit that most humans are vastly overrated.

I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.

Anecdotally, I'm seeing people do mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them how to behave, and right now traffic is too sparse for that.

(It could also be that a constant accident rate cleans off the worst of the drivers with regularity as they get into accidents and then wind up out of circulation. I really hope that isn't why ... that would be really depressing.)


No, they're underrated. We all know the stats; driving isn't the safest activity. Having said that, there's a lot of wishful thinking that the current state of ML could do any better if we put it on the roads today as-is.

You're right. For example, humans don't need anywhere near the amount of training data that AIs need.

I learned to drive a car when I was 13. My older cousin took me to Warped Tour, got hammered, and told me I had to drive home. I didn't know what a clutch was, let alone a stick shift. After stalling in the parking lot a couple of times, I managed to drive us from Long Beach all the way back to my parents' house in Pasadena. I'd love to see an AI handle that cold-start problem.

Cold start? You had 13 years!

Self-driving cars could work more like a hive mind. Humans can share ideas, but not reflexes and motor memory. So we practice individually, and we're great at recognizing moving stuff, but we never get very good at avoiding problems that rarely happen to us.

And we know we shouldn't drive tired or angry or intoxicated but obviously it still happens.


Exactly. The way to improve performance on a lot of AI problems is to get past the tendency toward individualistic AI, where every implementation has to deal with reality all on its own.

As soon as you get experience-sharing - culture, as humans call it, but updateable in real time as fast as data networks allow - you can build an AI mesh that is aware of local driving conditions and learns all the specific local "map" features it experiences. And then generalises from those.

So instead of point-and-hope rule inference you get local learning of global invariants, modified by specific local exceptions which change in real time.
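
As a toy sketch of that idea (everything here, names and structure, is invented):

  # Toy sketch of fleet-level experience sharing; all names are invented.
  class FleetBrain:
      def __init__(self):
          self.global_policy = {}     # invariants every car shares
          self.local_exceptions = {}  # per-location overrides, e.g. odd signage

      def report(self, location, observation):
          # One car's surprise becomes every car's prior.
          self.local_exceptions.setdefault(location, []).append(observation)

      def policy_for(self, location):
          return {**self.global_policy,
                  "exceptions": self.local_exceptions.get(location, [])}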


It seems to me that humans require and get orders of magnitude more training data than any existing machine learning system. High "frame rate", high resolution, wide angle, stereo, HDR input with key details focused on in the moment by a mobile and curious agent, automatically processed by neural networks developed by millions of years of evolution, every waking second for years on end, with everything important labelled and explained by already-trained systems. No collection of images can come close.
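
Back-of-envelope, with every figure a rough assumption:

  # Back-of-envelope; every figure here is a rough assumption.
  years = 16                  # time before a human starts driving
  waking_hours = years * 365 * 16
  effective_fps = 10          # conservative "frame rate" for the visual system
  frames = waking_hours * 3600 * effective_fps

  print(f"{frames:.1e} frames")  # ~3.4e9, vs ~1.4e6 images in ImageNet

That's roughly three orders of magnitude more frames than ImageNet has images, before even counting resolution or stereo.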

Depends on how you quantify data a human processes from birth to adulthood.

You're forgetting the millions of years of evolution.

But at the end of that video they state they were able to train a network to detect these phantom images. So this is something that can be fixed and has been proven to work. It's only a matter of time before it's in commercial cars.

That same video said they trained a CNN to recognize phantoms using purely video feed and achieved a high accuracy with AUC ~ 0.99.
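
For flavor, a phantom-vs-real binary classifier has roughly this shape; the network below is a generic stand-in, not the authors' actual architecture:

  import torch
  import torch.nn as nn

  # Generic stand-in for a phantom-vs-real classifier; not the paper's model.
  detector = nn.Sequential(
      nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
      nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
      nn.Linear(32, 1),  # logit: projected "phantom" vs. real object
  )

  crop = torch.randn(1, 3, 64, 64)      # detected-object crop from video
  print(torch.sigmoid(detector(crop)))  # p(phantom)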

30%+ downvotes suggests there is no consensus around this issue.

I have an AP 2.5 Model 3. It will never be fully self-driving. It still has trouble keeping lanes when the stripes are not simple. It still phantom-brakes.



