Apple engineer killed in Tesla crash had previously complained about autopilot (kqed.org)
604 points by jelliclesfarm 11 days ago | 886 comments

I avoid new technology, exactly because I'm an engineer.

I wonder if it's just me, but when a new technology is introduced and hyped, I usually take a quick look at implementations, research, and talks, just to get an idea of what the state of the art is really like.

As a result, I have become the late adopter among my group of friends because the first iteration of any new technology usually just isn't worth the issues.

You know that effect where you open a new book and immediately spot a typo? I felt that way looking at state-of-the-art AI vision papers.

The first paper's code would crash, despite me using the same GPU as the authors. Turns out they had been incredibly lucky not to trigger a driver bug causing random calculation errors.

The second paper converted floats to booleans and then tried to use the gradient for training. That's just mathematically wrong: a step function's gradient is zero everywhere it is defined.
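The failure mode is easy to demonstrate without any framework. A minimal sketch in plain Python, using a finite difference in place of autodiff (the `step` function stands in for the float-to-bool cast):

```python
def step(x):
    # hard threshold: what a float -> bool cast amounts to
    return 1.0 if x > 0.0 else 0.0

def finite_diff(f, x, eps=1e-6):
    # central-difference estimate of df/dx
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# The estimated gradient is exactly 0.0 everywhere away from the
# threshold, so no learning signal can flow back through the cast.
for x in [-1.0, -0.1, 0.1, 1.0]:
    print(x, finite_diff(step, x))
```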

The third paper used only a 3x3 pixel neighborhood for learning long-distance motion. That doesn't work; I cannot learn about New York by walking around in my bathroom.
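To put a number on the bathroom analogy: with stride-1 convolutions and no pooling, the receptive field of stacked 3x3 layers grows only linearly, two pixels per layer. A quick sketch:

```python
def receptive_field(num_layers, kernel_size=3):
    # Effective receptive field of a stack of stride-1 conv layers
    # (no pooling, no dilation): each layer adds kernel_size - 1 pixels.
    return 1 + num_layers * (kernel_size - 1)

print(receptive_field(1))   # 3: one 3x3 layer sees a 3x3 patch
print(receptive_field(10))  # 21: ten layers still see only 21x21 pixels
```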

That gave me the gut feeling that most people doing the research were lacking the necessary mathematical background. AI is stochastic gradient descent optimization, after all.
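For anyone who hasn't seen it written out, that optimization core really is small. A toy sketch in plain Python (the data, learning rate, and iteration count are arbitrary choices for illustration):

```python
import random

# Minimal stochastic gradient descent: fit y = w*x + b to noise-free
# data generated from w=2, b=1. This loop is the core that the big
# frameworks wrap in layers of machinery.
data = [(x, 2.0 * x + 1.0) for x in [-2, -1, 0, 1, 2]]

w, b, lr = 0.0, 0.0, 0.05
random.seed(0)
for _ in range(2000):
    x, y = random.choice(data)   # "stochastic": one sample at a time
    err = (w * x + b) - y        # prediction error
    w -= lr * 2 * err * x        # d(err**2)/dw
    b -= lr * 2 * err            # d(err**2)/db

print(round(w, 2), round(b, 2))  # converges near 2.0 and 1.0
```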

Thanks to TensorFlow, it is nowadays easy to try out other people's AI. So I took some photos of my road and put them through state-of-the-art computer vision models trained on KITTI, a self-driving-car dataset of German roads. None of them could even track the wall of the house correctly.

So now I'm afraid to use anything self-driving ^_^

This whole self driving, fully autonomous thing has become some sort of strange faux-religion.

Its followers see no other possible solution as acceptable.

Instead of putting our heads together to make really good driver-assistance technologies and being satisfied, the darn thing needs to also drive itself everywhere, otherwise we're leaving something on the table! Until we have zero deaths, we cannot stop demanding self-driving - as if somehow AI is a magical solution that's always better than humans.

Forget better frame, chassis, and body panel design to protect pedestrians! We won't need them if AI never hits anyone.

Forget better braking systems that apply themselves automatically. We don't need that if AI can always avoid the need for sudden stops.

Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.

No, forget it all. My car needs to drive me all by itself, everywhere I need to go, no matter how hard or impossible of a problem that is. Driver assist is boring... Level 5 is sexy, and that's what I want!

> This whole self driving, fully autonomous thing has become some sort of strange faux-religion.

What really bugs me, as someone working in consulting as a hands-on backend developer with a previous background in technical consulting at an accounting Big4 (student job), is the insane amount of PR, politics and marketing talk by people who have ABSOLUTELY NO IDEA what they are talking about. I witnessed politicians and C-level industry people talking out of their asses to ... idk ... drive stocks? Look smart in the face of Tesla? PR? No clue.

Self-proclaimed experts in magazines and talk shows raving about how AI is going to change everything. I had colleagues telling customers about the magnificent rise of AI and none of them could even spell "gradient descent". Backed by a law or accounting degree they KNOW that self-driving is just around the corner and they are very vocal about it while easily impressed by tightly controlled demos at some international tech fair. Everyone just seems to fall into the hype trap without a single brain-byte spent on researching the actual issues and what's most sad is actual engineers/technical people not doing their due diligence and informing themselves BUT THEN GOING OUT TO TELL THEIR NON-TECHNICAL FRIENDS ABOUT THE AUTONOMOUS FUTURE. Ugh.

I gave a short internal talk at work about self-driving cars, for people with other backgrounds and a light, superficial interest due to "Tesla" and "the hype". They were surprised to hear that we are likely decades away from actual Level 4 (not the marketing garbage that some companies put out), because even a slight change in weather can really fuck with all the systems on the roads right now.

>...without a single brain-byte spent on ...

This is a glorious Gibson-esque turn of phrase that I hope goes down in history and is picked up in common vernacular.

Back on topic: the more I know about technology, the less I want it or trust it to work. I assume this is similar to pilots not wanting somebody else to fly, or surgeons not wanting to go under the knife.

We know that we don't know enough about autonomous driving. Instead of the 'unwashed masses' saying "gee-whiz, that is cool", we think "this isn't ready yet!"

We know that, but also we ought to know that pushing through the hard problems by rolling out systems operating world-wide and interacting with real people is the only way forward toward improved safety, reduced mental slavery to the menial task of driving, and the time and relationship freedom that comes with it. So yep, I let my Tesla do lots of driving. And I do it knowing that the brake pedal and the steering wheel are both manual controls that I can use to override the car at any time, no matter what. That is why I feel so comfortable.

And you know what? It's wonderful. It really is. I am free to look around and see other people on the freeway. They look so bored. So tired. So used up. Meanwhile my wife and I turn on the karaoke machine and sing our way to the destination. It is absolutely the way to go. You can pry my Tesla and its AP functions from my cold dead hands. Maybe that's how I go, and I'm alright with it.

I wouldn't feel comfortable in a vehicle that may or may not react to some situation on the road ahead. Because one day it will decide to do $something_really_stupid, and you won't be able to react in time or correctly because you'll be too busy singing karaoke. With some extra bad luck, you'll die or kill someone else because of that.

Yeah, but is that a bigger risk than some dumbass in a truck blowing a red light and plowing into my door? Based on where Tesla is at I don't think it is. They're already reasonably below the median risk we all accept for driving at all. After all, my butt can feel that the car is doing something seriously unexpected way before my eyeballs can. So I'm going to keep singing, with my thumb gently on the wheel and one eye on the road.

With all due respect, that is just as likely to happen with human drivers as well.

> is the insane amount of PR, politics and marketing talk by people who have ABSOLUTELY NO IDEA

I have seen this too much. People who have a deep understanding of how things work are busy learning and doing. People who can spend their energy on politics and marketing can do so because they are not busy learning and doing. I wish there were a solution for this. Corporate IT departments are particularly full of this.

Predicting with All-Caps confidence that autonomous driving is "decades away" is at least as indefensible as overly optimistic predictions were.

I think autonomous driving advocates would do well to look at the history of computer handwriting recognition, an easier technical problem with lower consequences that received significant investment over decades. But it has never gotten good enough to succeed in the marketplace against alternatives.

Why? It never exceeded consumer expectations, which are extremely high for automated systems. Even a correctness rate of 99.9% means multiple errors per day for most people. Consumers expected approximately zero errors, despite not being able to achieve that themselves, sometimes even with their own handwriting!
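A quick back-of-the-envelope makes the point concrete; the characters-per-day figure is an assumed, illustrative number:

```python
# What 99.9% per-character recognition accuracy means in practice.
accuracy = 0.999
chars_per_day = 4000    # assumed: a moderately busy note-taker

expected_errors = (1 - accuracy) * chars_per_day
print(round(expected_errors))  # about 4 misrecognized characters per day
```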

Because handwriting is made by humans, there is some percentage of it that simply cannot be reliably recognized at all. But people hold that against computers more than other people because computers are supposed to be labor saving devices.

Likewise, roads are made by people and other cars are driven by people, so a self-driving car will never be perfectly safe. But that is essentially what advocates are promising.

That’s especially true if people expect the same level of convenience, especially in terms of time. People speed and take risks all the time when driving, in the name of saving time. I think it’s likely that an autonomous car optimized for safety would also be a car that just takes a lot longer to get anywhere with.

Speed matters. It’s a big reason we all use touch keyboards on our phones instead of handwriting recognition.

> [comparison to hand writing recognition]

Excellent point; stealing that. I work in automotive as an engineer, have traveled around the world, and think the realistic possibility of self-driving cars, without major changes in how we make roads everywhere, is extremely low.

My error rate in recognizing handwriting would be much higher than a free, open-source recognizer's. I am very bad at recognizing other people's handwriting. I don't understand your need for zero errors. Real life is full of errors and imperfections; there is chaos everywhere. It seems like you expect the unachievable.

Handwriting recognition works very well if you can capture the actual strokes used while writing and not just the end result.

I doubt it. My handwriting is of at least average neatness, and stroke-based recognition systems still make multiple errors per sentence. It's just a frustrating waste of time, and now that we have touch-screen keyboards there's no longer any point to handwriting recognition.

The only handwriting recognition system which ever worked correctly with a low error rate was Palm Graffiti. It forced the user to learn a new shorthand writing style designed specifically to avoid errors.


The secret to Palm Graffiti's market success was that it hacked user expectations.

Because it asked users to learn a new way of writing, when the recognition failed, users were more likely to blame themselves, like, "Oh, I must have not done that Graffiti letter right, I'll try again."

But when it came to recognizing regular (i.e. natural) handwriting, users believed inherently (i.e. somewhat unconsciously) that they already knew how to write, and the machine was new, so mistakes were the machine's fault.

While we're sharing anecdotes, my handwriting is remarkably terrible, and the iPadOS Notes app does a good job of transcribing it.

I think this supports the grandparent's point about using the actual strokes, including angle and azimuth, to reconstruct intent.

I was also fairly proficient with Graffiti, back in the day, but I consider that an input method, not handwriting recognition. I was facile with T9 as well.

Analyzing the individual strokes works flawlessly with Chinese and Japanese, where the stroke order is fixed (occasionally with a few variants). If you have the stroke information and the user writes correctly you can recognize characters that even humans would fail to read from the finished glyphs.

That's great, but I would wager that nearly 100% of all writing ever done in human history was done without capturing the strokes while writing. Therefore, while this added accuracy is great, it is virtually useless for most written work.

Isn't that a bit irrelevant? If we are talking about patterns that work well for the user, clearly writing everything traditionally and then going back and taking pictures of everything is a cumbersome process. Writing on iPad or similar is clearly the medium in which this shines, at which point you do capture the strokes.

That only works if you can assume that everybody using the system you're designing has access to the underlying technology. Sure, if you're designing some new system (an autonomous vehicle on a closed-loop, controlled track, or a device purpose-built to recognize digits as they are written on it, though then why not have the user type directly on a keypad?), you'll get a better result. But in the general, real-world case (an autonomous vehicle on city streets with other vehicles, or recognizing digits from scanned input without the stroke data), those special-case optimizations are impossible and for all practical purposes do not apply, so appealing to them doesn't actually help the system perform better.

While that's true, having the ability to capture strokes now allows machine-learning models to better determine what strokes were likely used to make a specific shape. Just because we didn't have it for everything doesn't mean it can't add accuracy when recognizing writing from the past.

So by analogy, autonomous driving will work very well if we can capture all the roads as they're being built?

> "handwriting recognition... has never gotten good enough to succeed in the marketplace against alternatives."

There is little market demand for handwriting recognition, and thus little active research goes into it. Not because it is a difficult or problematic technology, but because better alternatives exist that make it irrelevant.

Even if someone were to come up with an absolutely perfect handwriting recognition system, most people wouldn't use it. Why? Because the advent of multi-touch screens means that most people can type much faster than they can hand-write anyway.

This is all true today, but it was not true in the past. There was a time in the computing industry when everyone believed that pen interfaces with handwriting recognition would be a crucial enabler for highly mobile computing. Both Apple and Microsoft built major product launches around this idea in the early 1990s.

Oh, absolutely. I remember that era well. But I'm talking about today, of course.

What changed was that touch screens became better. The old resistive touch screens were clunky, slow, and inaccurate. You could put a keyboard on them, but the lag and poor accuracy meant you couldn't really touch-type comfortably. Then multitouch came along and made on-screen keyboards much more responsive and accurate.

But also, Blackberry and (pre-smartphone) phones with SMS made people more comfortable with the idea of using keyboards for text entry on handheld devices. And crucially, auto-correct and predictive text entry covered up for accuracy errors and made text entry by keyboard even more attractive.

I wouldn't bet on it but I also wouldn't call it indefensible. Fully autonomous driving is a very complex problem with a very long tail. Being able to drive semi-reliably on American highways doesn't mean that you're almost done, not even close.

Another handicap for self-driving cars is that the problem is effectively harder at the start, when the majority of traffic will still be operated by human drivers, who are a lot harder to predict reliably than other autonomous vehicles.

Beyond that, I strongly believe that software engineering is still ridiculously immature and unable to deliver safe, reliable solutions without strong hardware failovers. We have countless examples of this. We simply don't have the maturity yet, we're still figuring out what type of screwdrivers we should use and whether a hammer could do the trick.

Having driven in Canadian winters, I honestly agree that reliable autonomous driving in inclement weather is indeed decades away.

The visual recognition needed is well beyond the systems today.

Isn't that just "image de-obfuscation" though? Seems like narrow AI will be able to out-class humans at that in no time. You can generate as much training data for that as you want. Doesn't really require human-type intelligence. Though I guess you might mean that the obfuscation makes the edge cases even harder, which makes sense.

There's like fifty caveats that go along with this statement, but this is the internet so I'm just going to skip all that.

Something like half your human brain is devoted to visual processing.

There's a tendency to think that things like language are what make the human brain special, or our ability to plan or think abstractly, and we talk about things like "eagle eyes", but the truth is humans are seeing machines, with most everything else as an afterthought.

The reason your cat will attack paint spots on glass for hours and flips the hell out about laser pointers is because their visual systems are too simple to distinguish between those and the objects that actually interest them, like insects.

Vision is not the easy part of AI.

> Vision is not the easy part of AI.

I think it is, actually. Going from raw pixels to objects is the (relatively speaking) easy part. It's the next part (using that for planning and common-sense reasoning) that's the hard part. Machine learning has already advanced past humans in this regard for many classes of problems - which is part of the reason why captchas are getting so hard.

This was several years ago, hence the move away from obfuscated text (which was getting harder and harder to read): https://spectrum.ieee.org/tech-talk/artificial-intelligence/...

I'd be surprised if basic perception tasks last more than a few more years as human-ness tests.

citation for "half your human brain is devoted to visual processing"?

Since the internet places no weight on things like "common knowledge to anyone in the field" or "I took a bunch of classes on the brain in college", here's a random quote from someone at MIT: http://news.mit.edu/1996/visualprocessing

I have a car (Honda Pilot) where the company decided to make the lift gate window too high, probably to accommodate mounting the spare tire inside the cabin. This design makes you dependent on the rear camera for most reverse use cases.

It probably made a lot of sense in the Southern California design center. In Upstate New York, that camera is covered in road spray and salt, and I cannot see anything or back up effectively without cleaning it first. Even after doing that, it will get dirty again after a few minutes of driving.

I’d guess that at least a few dozen people will be hurt by this decision.

Take this problem to the self-driving car and things get even worse. You’re going to have a lot of problems with sensor effectiveness that cannot be magically fixed with software.

And I have heard rumors of lobbying going on to get the requirement for rear view mirrors dropped when video feeds are provided to replace the functionality.

It sometimes seems as if half the purpose of assistive driving systems is to compensate for the absolutely horrible sight lines in a lot of newer vehicles.

Here's my go-to example about the challenges of driving in a Canadian winter.

I was waiting for my bus to work one morning after a large snowfall. The snow clearing crews were hard at work, but the street was effectively blocked by piles of snow, men, and machines.

Yet, my bus arrived on time, *driving down the sidewalk*.

I am not sure how any self-driving system could have figured that out :)

And if it did, hollow sidewalks are a thing in some places, so...

On what grounds?

I wonder why the self-driving car hype folks don’t simply lobby to make more trains.

(I mean the ones who have been successfully marketed to here, not the marketers).

We had self driving vehicles for 30,000 years. I’ve thought about just getting horse brains hooked up - but people probably don’t want their cars spooked by plastic bags.

How do you know the people bullish on self driving cars haven’t also lobbied for trains? There’s a lot more standing in the way of trains being built in the US, unfortunately. To me, self driving cars are just the second-best thing, and one that might actually happen. It’s a bit too late for trains to take over in most of the country.

Sorry for the out-of-the-blue, unrelated reply, but I am currently stuck working as a technical consultant at one of the Big4. How did you make your way out of this? I feel like I am learning about antiquated technologies, and pivoting to a software engineering job more aligned with my interests, skills, and sanity feels harder every day. Even something like what you describe doing now sounds much better than what I'm doing. Also, I know exactly what you are talking about with the marketing and PR talk; it is insane.

Some folks I know joined code camps / Lambda School-like programs and got out. Alternatively, if you work on clients, an easy way out is to accept a full-time job at any client you enjoy (or tolerate) working with.

Sorry, late reply.

I got out by never really getting in, I'm afraid. I worked for 1.5 years in technical consulting as a student job while getting my informatics degree.

Once I obtained my degree, I declined an offer by said Big4 firm and took another offer where I got to go hands-on with coding. I had previous coding experience which helped and then amazing colleagues who boosted my start.

Yeah but speaking as someone kinda on the other side there are things like this 'actual Level 4' delivering stuff for real in Wuhan https://kr-asia.com/jd-com-uses-l4-autonomous-driving-soluti...

Fair enough, it's not very good - that one just went 600m - but it's hard to argue it won't exist for decades when it exists now.

And historically, going from "sorta works but is rubbish" in info tech (e.g. early cellphones, the internet, and so on) to "works well" doesn't seem to take that long. Five to ten years, perhaps, typically.

There's no reason to believe that's a level 4 autonomous vehicle, other than the marketing release of the company that makes it.

Going from early cellphones to smartphones was an engineering problem. All the technology was already available and it was a case of putting it all together in a way that worked and that could be manufactured at scale and for profit.

With vehicle autonomy, the problem is that we don't know how to create autonomous AI agents yet, so we don't know how to make a car driven by such an agent. Claims of level 4 autonomy should, at this point, be treated like claims of teleportation or time travel: theoretically conceivable, but we don't know how to make it happen in practice.

The issue is that there are so, so many edge cases to worry about. Simple example: someone decides to troll you in your self-driving car and steps out in front of it. You don't have controls in the car, so you can't go around them - you have to wait for them to move.

In reality, it seems like it would resolve quickly - you get out and yell at them, call the police if that doesn't work, etc. But it can get more sinister - criminals _already_ block the road to force drivers out of their car to rob them[1][2]. Now, if you know that might happen, you can just drive around the obstacle. Unless, of course, you're in a self-driving car where you might not even be able to get it to do a u-turn. Related issues would be areas where the practical advice is "don't stop" - not even at red lights - if you're there late at night due to the risk of car jacking[3][4] (this might be out-of-date now, to be fair). Can rules like that be encoded into a self-driving car?

OK, yes, you probably could find a way to do it. But that's almost certainly just the tip of the iceberg in terms of "ways people will fuck with self-driving cars" and "things people do that are technically illegal but still safer than the alternative." Could you solve enough of those in 5-10 years, _on top of_ making self-driving cars work in sun/rain/snow/fog/night/tornados/etc safely and consistently? I think that's very unlikely. Decades seems far more likely to me.

[1] https://abc7chicago.com/593111/

[2] (Non-EU only) https://wgntv.com/2015/03/31/robbers-set-up-fake-road-blocks...

[3] https://eu.detroitnews.com/story/news/local/detroit-city/201...

[4] https://www.reddit.com/r/AskLEO/comments/2rzsdz/are_there_an...

Have you seen the roads in Alabama? They have this weird, red asphalt and rarely any edge lines. In Louisiana they have these elevated roads over the bayou that don't seem to have break-down lanes. In Tennessee they have roads that go through mountains with shocking curves and gradients. Pot holes, broken lights & signs, weird parking lots, new construction, and seeing a small ball roll near the road and knowing that a kid might be coming after it soon.

It seems like it would take an endless list to cover every new edge case. Our technology is amazing, but I almost think the edge case is the places where autonomous driving makes sense.

The thing I trust the least is the operators they want to put in these cars. They better be completely autonomous, self-maintaining, and somehow tamper-proof. It's a really tall order, which I hope we do fulfill one day. But maybe I'm a pessimist, and they have it all figured out already.

Re robbers, carjackers, and the like: in some ways you may be better off with a self-driving car, as they seem to be covered with cameras and report back to base the whole time, so the crooks would be photographed and the cops called. The Tesla cameras have already caught a few: https://www.youtube.com/watch?v=JqBWt9rRx-U

Re all the edge cases - yeah that'll take a while.

There was a movie which featured autonomously driving trucks, they were held up and stolen from by the bandits putting a cow in front of them, then just taking the cargo while it was stopped.

You can certainly have all the cameras that you need but if the bad guys have their faces covered and identifying marks hidden then you're not going to be able to do much.

> There was a movie...

I'm pretty sure there is an early scene in the movie Solar Crisis[1] that plays out similarly to what you're describing. This movie was on one of the cable movie channels when I was growing up, so I got a higher-than-normal dose of it.

I don't remember a cow, though (but then this probably isn't the only sci-fi movie out there with such a scene). I think one of the characters first parked a motorcycle in the road, but the truck plowed through it. After that, they stood in the road instead, and that caused the truck to screech to a stop right in front of them, blasting a message on a PA about how they were breaking the law by impeding it.

[1]: https://en.wikipedia.org/wiki/Solar_Crisis_(film)

Yes, that’s going to be very helpful to the investigators when they finally figure out what happened to the car and to your body. Might not be helpful to you, though...

License plates can be covered (or, more often, a stolen car is used), and balaclavas cost just a few dollars.

If it is known to be stolen, a hi-tech truck can be remotely ... of course, that introduces another kind of risk. But one could ensure that only one person can open the container, or that the container is secured with a PIN at a certain distance from the driver, etc.

There is always a way.

Five years ago or so, video from "DARPA robot challenge"-type events would easily give the lie to "superb autonomy" claims. I just found this[1] more impressive, but note it's at 20x playback speed. I imagine that playback at 1x could still serve as a reality check and counterexample.

[1] https://www.youtube.com/watch?v=v6-heLIg85o

You think that's bad? Listen to sales types talk about "blockchain" sometime.

note that "gradient descent" isn't AI either. It's numerical optimization: an iterative heuristic for finding (usually local) extrema of functions that have no direct analytical solution.
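A minimal illustration of the "local extrema" caveat (function and starting point chosen arbitrarily): started on the wrong side of a hump, plain gradient descent settles into the nearer, worse minimum.

```python
# Gradient descent on the nonconvex f(x) = x**4 - 3*x**2 + x.
def grad(x):
    return 4 * x**3 - 6 * x + 1  # f'(x)

x = 2.0                      # start to the right of the hump
for _ in range(10_000):
    x -= 0.01 * grad(x)

# Settles near x ≈ 1.13 (a local minimum); the global minimum sits
# near x ≈ -1.30, but plain descent can never reach it from here.
print(round(x, 3))
```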

> Forget better frame, chassis, and body panel design to protect pedestrians! We won't need them if AI never hits anyone.

Most car manufacturers have had this figured out for a long time with crumple zones and the like.

> Forget better braking systems that apply themselves automatically

Assisted braking technology is already implemented in some cars. Hell, Tesla implements basically exactly what you're asking for...

> Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.

Teslas don't let you sleep in your car, you have to move the steering wheel periodically to prove you're still paying attention or it'll pull over and shut down.

Also, I'm not quite sure what you're expecting seat belt enhancements to be.

Far be it from me to defend the AI hype, but your "things we should be focusing on instead" don't make much sense when we ARE focusing on them.

Modern cars are deadly to pedestrians, especially SUVs.

Reading through the crash test procedure, it is astounding how little attention is paid to pedestrians.

1. Front crash test. Procedure: Crash car into stationary barrier at 35 mph. Is also applicable to face-to-face crash with car of same size, going at same speed.

2. Side crash test. Procedure: Slam concrete block into side of stationary car at 38.5 mph.

3. Side pole test. Procedure: Drag car sideways towards a pole.

4. Rollover resistance. Procedure: Compare the car's footprint to the height of the center of gravity.

The biggest thing to notice is that not one of these metrics involves pedestrians. Metrics 1-3 can be easily improved by making a bigger car, elevating the passengers and providing more crumple room. Metric 4 is unaffected, as the track width is increased to compensate.

If a low sedan hits a pedestrian, the pedestrian rolls over the car, with the impulse spread over a longer period of time and therefore a lower force. If a tall SUV hits a pedestrian, the pedestrian is knocked straight back, with the same impulse delivered over a shorter period of time and therefore a higher force. Safety ratings need to account for the danger cars pose to others.

Source: https://www.nhtsa.gov/ratings

Source (SSF): https://www.safetyresearch.net/rollover-stability
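The impulse argument above can be put into rough numbers; every figure here is hypothetical and only the ratio matters. The pedestrian's momentum change (impulse) is similar in both crashes; the interaction time is what sets the average force:

```python
# Illustrative impulse-vs-force arithmetic; all numbers are hypothetical.
mass_kg = 70.0            # pedestrian
delta_v_ms = 10.0         # ~36 km/h speed change

impulse_Ns = mass_kg * delta_v_ms       # 700 N*s either way

t_roll_over_hood_s = 0.20               # sedan: longer interaction time
t_knocked_back_s = 0.05                 # SUV: shorter, more abrupt

print(impulse_Ns / t_roll_over_hood_s)  # 3500.0 N average force (sedan)
print(impulse_Ns / t_knocked_back_s)    # 14000.0 N average force (SUV)
```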

European safety rating has a category for "Vulnerable Road Users".


> Reading through the crash test procedure, it is astounding how little attention is paid to pedestrians.

In the U.S. at least, pedestrian safety concerns mainly affect prescriptive legislation (e.g. no pop-up headlights). Some countries and blocs have testing similar to crash tests, but I'm not really sure how effective something like that is: any meaningful standard would need exceptions for different categories of vehicle. Though honestly, I can't see that much can be done about pedestrian safety once your vehicle is colliding with a human being.

This is a trap we all fall into. Just because you are smart and don't understand how a thing could exist doesn't mean it doesn't. We often use this crutch when absolving someone else of an action taken or a design flaw: "I would never have thought of that!" or "How could anyone have anticipated that?"


"Though honestly I can't see that much can be done about pedestrian safety once your vehicle is colliding with a human being."

I don't agree; there are some deliberate choices in car design that affect the aftermath of a collision and can determine whether the pedestrian survives. As pointed out above, a pedestrian hit by a car has a better chance to roll over the hood, vs. an SUV, where the pedestrian would probably be hit and fall under the car.

I imagine any multi-ton mass moving in excess of certain speeds will be deadly to unprotected soft-bodied organisms. How do modern cars differ from non-modern cars in this respect? We've added better brakes, back-up cameras, and object detection to avoid running into people, hopefully reducing the number of incidents, but yeah, you hit someone with a car moving at any appreciable speed and it's gonna do damage.

Look at European pedestrian safety regulations. There's a reason that the shape of the front of European cars is all kinda the same - they are designed to minimise pedestrian casualties.

Obviously no one is going to survive if you hit them at 70, but you can make a big difference in the 25-35 region that is the normal speed where there are a lot of people around.

Most 1-3 ton objects moving with any sort of momentum are. What is your point?

70% more likely to be killed when struck by a larger car [1]. With a higher front face, pedestrians are struck in the chest, rather than the legs. Turns out, broken legs are a lot more survivable than damage to internal organs.

[1]: https://www.theguardian.com/cities/2019/oct/07/a-deadly-prob...

By that logic, a low-slung sports car is the most pedestrian-safe, and Ferraris should get a pedestrian safety rebate. I'll take one!

So long as there is sufficient space between the hood and the engine, which tends to be the main issue with very low cars. Sheet metal deforms as a pedestrian bounces off of the hood, increasing the interaction time and decreasing the instantaneous acceleration. This requires at least 10 cm between the bottom of the hood and the top of the engine. Less distance, and a pedestrian instead bounces off of the engine block, which doesn't deform on impact.
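A rough back-of-envelope sketch (my own illustration with assumed numbers, not from the parent comment): for a body stopped from impact speed v over crush distance d, the average deceleration is a = v²/(2d), so the 10 cm of hood clearance versus, say, 2 cm above the engine block makes a 5x difference in average load.

```python
# Back-of-envelope: average deceleration for a body stopped
# over a given crush distance, a = v^2 / (2 * d).
G = 9.81  # standard gravity, m/s^2

def avg_decel_g(speed_kmh: float, crush_m: float) -> float:
    """Average deceleration (in g) when stopping from speed_kmh
    over crush_m metres of deformation."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2 * crush_m) / G

# Hypothetical 30 km/h impact:
# hood deforms 10 cm before reaching the engine block...
print(round(avg_decel_g(30, 0.10), 1))  # ~35.4 g
# ...versus only 2 cm of clearance above the engine.
print(round(avg_decel_g(30, 0.02), 1))  # ~177.0 g
```

The constant-deceleration assumption is crude (real crush behavior is nonlinear), but it shows why those few centimetres of clearance matter.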

I was hoping for something mid-engine!

Crumple zones are designed to protect the occupants, not anyone external to the vehicle. Granted, the kind of basic design changes that would pretty obviously help with pedestrian harm are also... not sexy. So it seems unlikely car manufacturers will sacrifice too much on the aesthetics front when car ownership is such a status symbol.

Count me guilty then, because I am also hoping to one day take a nap while the car drives me around. Until then, I found the next best thing is booking an Uber. They'll even use venture capital to subsidize my ride :)

But wait, isn't that exactly why they are now in such a rush and cannot accept anything less than full autonomy?

From what I heard, Uber has been burning billions of VC money to capture market share. And their model just won't work financially if they need to pay drivers a living wage. So they attempted trickery to pay them less as "independent" contractors, but now that governments are stepping in to prevent that, there is only one option left:

Uber needs self-driving cars or else they'll go bankrupt.

At least, that's my theory.

I'm not certain self-driving cars are going to save them. Right now, what they pay their drivers is the all-in cost for their fleet. When they switch to fleet management, they'll be paying capital costs and maintenance instead. I don't think the latter costs are much lower, if at all, than what they pay drivers now.

There is this other technology called "chauffeur"

And in the third world (e.g., Mexico), they are an order of magnitude cheaper than a Tesla.

> Uber needs self-driving cars or else they'll go bankrupt.
> At least, that's my theory.

I believe it--at least, I hope it would take an existential threat to make them push kludged-together pedestrian-killers into use on the roads.

> So they attempted trickery to pay them less as "independent" contractors

An Uber driver is the very definition of an independent contractor. Many drivers also drive for Lyft. How can they be “employees” of Uber while also driving Lyft? Or doing Door Dash?

If Uber drivers were employees, they would have to work where and when assigned. As it is now, Uber drivers come and go as they please.

I am not sure how an Uber driver is any different than a freelance journalist or musician.

You are aware that people can have multiple jobs, right? It's very common, especially for poorer people.

And those usually have set, non-overlapping hours, even if those hours can vary from week to week. An Uber/Lyft driver can switch back and forth from ride to ride and has no set hours at all.

I'm very excited about the possibility of self-driving cars. Let the car drive while I take a nap or read a book.

But I have a hard time believing that the technology is anywhere close to being mature. You need a lot of contextual knowledge to drive safely in unusual circumstances. I totally believe that within well-defined limits, AI already outperforms humans, but traffic has no well-defined limits. Anything can happen.

You probably could get self-driving cars if you really wanted to, provided that the self-driving traffic doesn't mix with regular traffic. And ideally without humans in the self-driving cars.

Not sure that's what the self-driving cars proponents envisioned though.

Let the car go park itself while I walk into the restaurant. Let the car go fill itself up with gas at night while I am soundly asleep.

> Let the car go fill itself up with gas at night while I am soundly asleep.

I'm bearish on both self-driving and the universal adoption of electric cars, but everyone will be plugging their car in at home long before they can make one that drives itself to the gas station.

>everyone will be plugging their car in at home

Absolutely not. Having to wait for hours to get a few kms of driving distance is way too much of a friction point for EVs to ever be more than a novelty, in addition to the usual complaints of people with only street parking, garages without outlets, etc. Either we'll get the battery-swap situation rolling or invent a faster charging tech, but either way there'll be some sort of "station" in the picture.

> everyone will be plugging their car in at home

I realize that a lot of people here are privileged enough to own a single family home, but the majority of humanity lives in apartments and parks on the street. Trickle-charging at home is not a universal solution. The only practical solution seems to be some form of rapid charging of the car's energy storage. Either by pumping huge amounts of amps into a huge battery pack, or adding some kind of chemical fuel that gets reacted in an internal combustion engine or a fuel cell.

Slow charging still works fine overnight, even if you slow charge on the street, instead of on your own property. Of course it would be nice if every parking spot came with rapid charging, but it's not like that's the only solution.

At the moment, policy in Amsterdam is that if you own an electric car, you get a charging point in your street. I don't know how fast those are, but they don't have to be fast. They're still useful for overnight charging, especially if the city continues to add more when more people get electric cars. I don't understand the argument that this is not in any way a solution. It is.

yeah, everyone is missing my point, which is that it's never going to happen

I often fall asleep on the bus, while reading a book.

It is a religion, nothing "faux" about it. It fills the same needs, and uses the same mechanisms within the human mind as more traditional religion.

It is also completely bonkers, as it makes bold assertions about the real world, with which reality will eventually disagree, whereas older religions tended to keep their most dogmatic positions unfalsifiable (the afterlife, the soul, vague prophecies, ...)

Calling it a religion makes about as much sense as calling the effort to cure AIDS, or to vaccinate against polio before Salk succeeded, a religion. There is a clear and obvious good to achieve, one which is theoretically implementable and has precedents for success. Reality can only "disagree" if the goal is outright proven impossible for some fundamental reason. Failed attempts don't prove impossibility: taking the spit of a polio patient and putting it in saline with a sprig of mint would spread the disease rather than vaccinate against it, but that wouldn't prove a polio vaccine is impossible just because "we tried and people got sick."

It is clearly a goal, and a realistic one within a few decades even pessimistically, given that its capability keeps creeping upward.

There is zero proof that autonomous vehicles can be safer than human drivers.

There’s lots of proof that autonomous vehicles are technically possible, but the leap to “definitely better than humans” is a very big one and it’s really being taken on faith right now.

In contrast, treating a disease directly affects the incidence of that disease.

There's "proof" that computers can be safer than humans. Faster reaction times, don't get tired, don't lose focus, can perform computations much faster than humans.

All that goes to show that a computer can make mistakes much faster than humans. You didn’t say anything about how that guarantees the computer only makes safe decisions.

But that's kinda the whole point of AI, isn't it? It's a circular argument, computers currently can't make safe decisions, therefore they will never make safe decisions.

Computers don’t exercise judgment.

The point of AI is to have computers exercise some form of judgement. What is there to suggest that they can't do that?


Automatic braking technology deployed in Japan has already started affecting insurance: so many vehicles have it that the accident rate has fallen significantly. This is one facet of autonomous decision making.

I mean fully autonomous AKA Level 5. I agree that there is plenty of proof that driver-assist safety technologies can prevent accidents.

What's really annoying to me is that we've been sold computer vision as the current solution. It's obvious that even if it is the eventual solution, it's a long way out, because our understanding of the field just isn't nearly as strong as it needs to be. Meanwhile, it's edged out a lot of interim solutions that could have been a lot safer, even if they have downsides.

For example, we could have embedded special markers (or spikes with RFID, or more active wireless, or any number of things) that could have provided far more accurate lane detection, as long as we were willing to require some up-front work to deploy it on special routes. Combined with very reliable lane detection, and restricted to specific deployed areas where it could be tested, computer vision and/or radar/lidar for vehicle and large-object detection (which would be mostly sufficient for most highway/freeway use) could likely provide a very safe system. The lower requirements for achieving safety might mean we could actually get some buses outfitted as well.

But that would require some actual state action, as no private company would (or should, at least if they were to keep it proprietary) deploy along large stretches of highway/freeway. Covering I-5 from Northern California to Southern California would provide many opportunities, but at an enormous cost.

SAE Level 5 is completely transformative, it will utterly change the market and the use of cars. Once you can conveniently call up an automated Uber the need, at least in cities, to buy your own car will effectively vanish.

This is Uber's endgame - get humans out of the loop. As long as cars still need human drivers this cost savings can't be realized.

> Forget better braking systems that apply themselves automatically.

This already exists: Mercedes, for example, first rolled out "Active Brake Assist" to a production model in 1996. Moreover any fully self driving vehicle will definitely need to be able to apply the brakes.

Why is it transformative? Especially in cities. You have Ubers/taxis/private cars/etc. today. So you hypothetically cut the costs (maybe) in half of hailing a ride. Which is speculative. Does that really transform your use of transportation? I doubt it would change my use of cars one bit.

There are tipping points. If always taking the taxi everywhere is more expensive than the cost of owning your own car, then most people drive their own cars and taxis are for special occasions. If after a tech change always taking the taxi everywhere is less expensive than the cost of owning your own car, then most people would do just that, and only a minority would own a car.

A switch from 20/80 to 80/20 is transformative, changes the default attitude and has further societal effects.
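The tipping point can be made concrete with a toy comparison; all numbers below are illustrative assumptions of mine, not figures from the thread:

```python
# Toy tipping-point model: own a car vs. taxi everywhere.
# All dollar figures are illustrative assumptions.

def ownership_cost(km_per_year: float) -> float:
    """Assumed: $4000/yr fixed (depreciation, insurance, parking)
    plus $0.15/km running costs."""
    return 4000 + 0.15 * km_per_year

def taxi_cost(km_per_year: float, fare_per_km: float) -> float:
    """Taxi has no fixed cost to the rider, only a per-km fare."""
    return fare_per_km * km_per_year

km = 10_000  # assumed annual distance
for fare in (1.50, 0.40):  # human-driven vs. hypothetical driverless fare
    cheaper = "taxi" if taxi_cost(km, fare) < ownership_cost(km) else "own car"
    print(f"fare ${fare:.2f}/km at {km} km/yr -> {cheaper}")
```

Under these made-up numbers the default flips from "own car" to "taxi" somewhere between the two fares, which is the 20/80 to 80/20 switch the parent describes.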

> has further societal effects

Like parking or more specifically - not parking. Here's an example development: "And parking clocks in at a full 29% of the developed land here, taking up twice as much total space as the actual buildings."


Parking may be reduced, but the flip side is that we'll have hordes of always-circulating cyber cabs, or private owners returning and summoning their vehicles from free parking at home, thus doubling the trips taken.

Not to mention sending a car out for errands. Self driving will free up parking garages, but fill up the roads.

Uber has no relevance once level 5 exists. Vehicle manufacturers will run the vehicles directly if there is any profit to be made from a taxi service.

I doubt it. Vertical integration only works so far.

I think the manufacturers will sell cars to those who want to run a taxi company. This is, and will remain, a race to the bottom because competition is fierce: people buy on price and on whoever gets to them first.

They will also sell cars to normal people. Most people don't think about how convenient it is to have the storage of their personal car. If parking is free or cheap (which it is in the suburbs), having your golf clubs in the trunk or a spare diaper in the glove box is worth the little extra cost, not to mention that your car is always there, so there's no need to wait for a taxi at busy times.

Of course there is also the city, but if you live in a city, self-driving cars still suffer from congestion, so public mass transit retains big advantages. In fact, because a self-driving taxi spends time driving empty between riders, it adds to the problem, meaning that for even more people public transit is worth its hassles (which in turn means more demand to make transit better).

> I think the manufactures will sell cars to those want to run a taxi company.

A self-driving car is software and hardware... you can sell hardware but software only gets licensed. Software is really where the value is, and that won’t be owned by anyone but the company that made it.

Look at the history of mobile phones, which were originally sold to consumers only by the carriers. As the phones got better and better software, the business model attached more and more to the manufacturer.

Congestion problems are dramatically alleviated if you can convert your city over to self driving only. Once the cars can drive in platoons they are packed in tight, no more accordion effect.

On the way something like UberPool would be needed. Or the taxi can just drive me to the nearest train station and I can take mass transit into the city.


Platoons help, but there is only so much space on the roads. In short, even allowing five times as many cars won't put the New York subway out of business. Environmentalists will still want to put everybody on the bus (pick any mass transit technology), even in smaller cities.

> Once you can conveniently call up an automated Uber the need, at least in cities, to buy your own car will effectively vanish.

I doubt it. One of the biggest obstacles is families and the need for infant car seats and child booster seats. Car ownership is here to stay for a while.

What's the problem with families? My local Uber-equivalent will have and provide child booster seats on request (as did many, but not all, "pre-app" taxi providers), and for infants there are a bunch of solutions depending on their age. One of them (perhaps not the best, just an anecdotal example) is the combo carriage + car seat: I can put the "carriage wheels" in the trunk and put the baby in the car safely without even waking them up. Crucially, this works in any car or taxi, and I've taken taxi rides with a baby this way.

It does require some planning (and some support from the service providers) but it's definitely a solvable issue, if the people would want to do that, then it can be arranged.

I suspect that most of the people ready to ditch car ownership don't actually use cars much. In addition to the families/kids gear, a lot of people I know have their cars/trucks setup for various types of outdoor activities such as carrying canoes.

How likely is Uber to kit their entire fleet out with snow tires in winter? That’s an easy example of something a lot of private owners do for safety during the winter. I did that so I could drive to snowed-in trailheads. What about chaining up? And how will the self-driving cab handle getting stuck in the snow? Will there be sand available?

Even without doing any real off-roading, it's one of the things I always feel a bit uncomfortable with when I take rentals to relatively remote areas out West. If it were my own vehicle I'd carry a lot more gear to handle potential problems than I realistically can carry on a plane and throw in a rental.

> Once you can conveniently call up an automated Uber the need, at least in cities, to buy your own car will effectively vanish.

As a passenger, there is no difference to me whether my car/bus/train is self-driving or not. As long as I'm not driving, it doesn't matter if a meat, or a silicon neural network operates it.

Given the above, how does this make self-driving a transformative technology?

In addition to all the hype-sters and scammers, you're probably seeing a lot of the often young urbanites with dreams of never owning a car and getting chauffeured around by super-cheap robo-Ubers all their lives. Many of them are either in denial or are feeling pretty betrayed at the moment.

That said, there are legitimate issues with incrementally improving assistive driving. People text and drive today without assistive driving. If a car can mostly make its way in rush hour down the highway autonomously, does anyone think that people won't routinely watch Netflix on their commute?

Exactly. For anything but open highway, self driving is so far into the future, that we should be focusing on making smaller pieces safer, ultimately building up to the end goal. If there is a self driving car in my lifetime that will drive up the windy mountain road to Yosemite, in the winter, then I'm the Pope. This all or nothing shit has to stop.

I honestly don't understand why there isn't more attention paid/focus on the fully autonomous limited access highway driving in "good" weather use case. That would be hugely attractive for a lot of people. Read a book/watch a movie while heading up to the mountains for the weekend or, for many, take a nap for part of their morning commute.

I suspect it's because it's less interesting to the demographic that's more concerned about being driven to and from bars on their night out or who just don't want to own a car period. But highway driving seems like a huge convenience and safety enhancement even if you just punt on city driving for at least the next few decades.

I'm not sure what you are saying actually maps to reality.

Frame and chassis have never been safer and manufacturers continue to improve. Many (most) new cars have automatic emergency braking that continue to improve. New cars seem to have an ever increasing number of air bags to protect passengers.

All these things are happening at the same time that self-driving is taking place. Tesla FWIW is pretty good at all the above despite their focus on self driving as well.

I agree it is just as ridiculous to say we need Level 5 to make something useful. Will it be decades before we have cars without steering wheels? Sure, maybe even longer. But what exists is already pretty great in most environments and only getting better. (E.g., crawling along in a traffic jam at ~15 mph is something I would really love to never do again, and it seems self-driving systems can handle this with aplomb these days.)

All of the other solutions you mention are where engineering resources are actually going, save for maybe the seatbelt enhancements. Every new car effectively comes with pedestrian crash prevention [0]; also, Volvo had a car with a hood airbag in 2013, and it looks like other auto manufacturers are looking into it [1].

Self-driving is only as hyped as it is due to the futuristic lure of the idea.

[0] https://youtu.be/6owYPHpmDLU
[1] https://www.autoblog.com/2017/12/13/patent-gm-external-airba...

We could have achieved "self driving" in a sense even back in the 80s, if we had had the will to spend billions (maybe over a trillion) on putting monorail power line tracks on roads, designing electric cars that latch onto them, and building a national traffic control system that controls/drives them. The traffic control system would have perfect knowledge of where every car is, and could direct traffic in a highly optimal manner from point A to point B for each vehicle. The vehicles themselves would have systems in place that prevent collisions (similar to the system the NYC subway has). Altogether, it could have been achieved, but it would have required an unprecedented level of public spending.

> This whole self driving, fully autonomous thing has become some sort of strange faux-religion.

Just follow the money. The near-future financials of companies like Uber and Lyft (and to a lesser extent Tesla) rely on fully autonomous self driving.

And that is why they will fail. You heard it here first.

technology has made good progress improving safety, to the point that it's mainly a social problem at this point. we've allowed cars to be all sorts of things (status symbols, entertainment centers, etc.) other than transportation devices requiring high skill and attention to operate safely, at the expense of life and limb.

for cars to be safe for drivers and riders, we need to optimize two things and strip away the rest (especially an over-reliance on technology as savior):

1. minimize distraction and maximize attention on the act of driving

2. maximize the skill of the driver in controlling the vehicle in all sorts of (unexpected) conditions

technology can actually reduce safety, either because it allows drivers to pay less attention or it lowers the skill bar. driver assist technologies--lane assist or automatic braking--fall into this category.

that's not to say safety technologies shouldn't continue to be developed--structural crash safety improvements, for example, don't have the same detrimental effects on driver attention or skill (with the caveat that ever-increasing weights can decrease control and increase lethality).

it's important to distinguish technological advancements that actually improve safety from those that merely improve our perception of it.

Comma.ai kind of follows the "improving lane detection until it drives you completely" line, at least according to what I get from this interview with Hotz: https://lexfridman.com/george-hotz/

He's still a bit too optimistic for my taste.

I bought into Comma.ai at first and even started porting my car over. But I quickly realized that the whole Comma.ai crowd is way, way too cavalier about safety. One day I was driving down the freeway and saw a small fender bender occur in front of me. I looked at my dash, and it was all ripped up with wires hanging everywhere because I was working on the Comma.ai install. If I had been in that fender bender, you know insurance would definitely have blamed me, given that I was messing with the OBD-II and radar sensors.

As cool as Comma.ai is, I really believe that their approach to allowing so much community involvement with little to no oversight is highly irresponsible.

That being said... if they do succeed and get some sort of government approval or oversight, you bet I'm putting that stuff back in. It's cool AF.

For selected use cases, self driving makes a ton of sense to me.

example: stop-and-go traffic - instead we could unlock millions of hours of human productivity (or provide entertainment).

example: self-parking and come-to-me, esp in closed garages. Parallel parking is hard for humans and we're poor at space utilization.

example: environments where obstructions are unlikely... airplanes have had auto-pilot for a long time... why not highway 80 in the middle of nowhere? why not trucks queuing to load/unload containers at port (or conference centers) - just drive up your incoming truck, grab your personal effects and take over the next outgoing truck while it queues for hours delivering a load and getting the next load.

There's lots of uses for self driving vehicles even before we deal with the hard cases. But they're not sexy and of course a tiny fraction of the labor savings and freedom-making.

> airplanes have had auto-pilot for a long time... why not highway 80 in the middle of nowhere?

You don’t get a stalled car on a Victor airway in the sky. You also don’t have to worry about obstacle avoidance for the most part, in the sky. If an aviation autopilot can’t hold the altitude or heading (such as in turbulence or in mountain up and downdrafts,) it will simply keep the wings level. Airplane autopilots follow explicit instructions: fly heading 143 at 14000 feet; descend at 500 fpm. Hold over a VOR using 1 minute legs at 200kts.

A car autopilot on the other hand, has to react to the physical surroundings. Not only “follow the I-10 at 75mph,” but also, watch out for incoming traffic, lane closures, or some kid on a bicycle that wanders into the road, or a dead animal in the road, or wet roads, icy roads, etc. There is no such thing as Instrument Flight Rules for driving, meaning a car autopilot has to be aware of the visual, while an airplane doesn’t: it just flies the precise route programmed without any awareness that the route might fly through a flock of geese. An airplane autopilot will fly you right into the ground if you let it. There is a lot of skill and training around airplane autopilot, and while it’s amazing and useful, it’s a lot more than simply turning it on and it flies you automatically to Denver.

Actually, one of the biggest obstacles to Level 5 is the expectation of zero deaths for Level 5 autonomous operations. The standard should be fewer deaths than human-operated (or even driver-assisted) cars.

When you have thousands of machines traveling in close proximity at speeds exceeding 50 mph, there will be deaths; this is unavoidable. We need to reduce those as much as possible, but to demand zero before the technology can be used is just unrealistic.

That said, just because some people are working toward Level 5 does not mean all of the other things you are asking for are not also being worked on; it is not a zero-sum game. There are enough people that we can have teams working on both.

This complaint is repeated for everything, "Well if people were not working on X drug that I don't care about they could cure cancer"

We can have a better braking system, better frames, etc AND still try to achieve level 5 autonomous driving. It is not an either-or proposition

One of the problems with merely fewer deaths than a human operated car is that technology tends to fail in 'silly' ways.

Also, at the very least a self-driving car should reach the level of a good driver; having self-driving cars cause as many deaths as drunk or inattentive drivers do today isn't defensible, especially since there's usually no explanation and nobody to hold accountable.

> technology tends to fail in 'silly' ways

And a scenario we can easily imagine is that a buggy update goes out to the whole fleet overnight that starts killing people all over the place.

The common case of accidents being on par with manual human driving goes out the window until the software is rolled back and for 12 hours, 24 hours, however long, we get a number of deaths that far outpaces what humans are capable of. The "worst case" would never apply to a manual/human population as a whole, at once.

This might be one of the few cases where it'd be better to not try to have all devices have the latest update all the time.

>nobody to hold accountable

Well, one of the issues is that someone more or less has to be held accountable. And that someone pretty much has to be the manufacturer. No one is going to hand over full control to a vehicle and accept the responsibility if that vehicle commits vehicular manslaughter because "software isn't perfect."

It's actually an interesting legal situation because, other than maybe drug side effects, there aren't a whole lot of consumer products which, properly used and maintained, sometimes randomly kill people and we're OK with that because sometimes stuff just happens.

What about simple speeding tickets? According to Wikipedia[1], Tesla Autopilot max speed is a whopping 90 Mph!!!

Who's responsible if you get pulled over for going 75 in a 65 mph zone?

[1] https://en.wikipedia.org/wiki/Tesla_Autopilot

>The standard should be fewer deaths than human-operated (or even auto assisted ) cars.

How do we go about testing this? By tallying up autonomous deaths until there are fewer per year than human drivers?

>We need to reduce those as much as possible but to demand ZERO before the technology can be used is just unrealistic

Human driver skill varies immensely by person. Anyone who is (or even considers themselves to be) a "good" driver will never accept "average death rates" as a risk when getting into an autonomous car. I know I wouldn't.

The goal has to be zero or it will never be accepted by the public.
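The "how do we go about testing this" question above can at least be framed statistically. A sketch of my own (hypothetical round figures, not from the thread): by the rule of three, observing zero fatalities over N miles gives an approximate 95% upper confidence bound of 3/N on the true fatality rate, so merely matching an assumed human baseline of ~1 death per 100 million miles already demands on the order of 300 million fatality-free test miles.

```python
# Rule of three: with 0 events observed in n trials, the 95%
# upper confidence bound on the event rate is approximately 3 / n.
HUMAN_RATE = 1 / 100_000_000  # assumed ~1 fatality per 100M miles

def miles_needed(target_rate: float) -> float:
    """Fatality-free miles needed so the 95% upper bound on the
    fleet's fatality rate falls below target_rate."""
    return 3 / target_rate

print(f"{miles_needed(HUMAN_RATE):,.0f}")       # match humans: 300,000,000
print(f"{miles_needed(HUMAN_RATE / 10):,.0f}")  # claim 10x safer: 3,000,000,000
```

This is why "just tally deaths until they're lower" is so slow: proving a 10x improvement with confidence needs billions of miles, which is part of what drives the push for enormous deployed fleets.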

Everyone thinks they are "good" drivers, even the person weaving in out of lanes, even the person that has a car that has 500 dents all over it from previous impacts that were all caused by "other bad drivers not me"

What will happen is that human-controlled cars will become $$$$$$ to insure once Level 5 is better than humans. At that point, if you can afford it, sure, you can reject it, but get out your wallet.

Why should insurance be more than today? Unless you're arguing that safety systems in other vehicles make you driving one without those systems more dangerous.

Insurance is about risk pools. If Level 5 becomes a reality, the assessed risk of a human driver will go up, and as more and more people adopt Level 5 (which they will, contrary to what people on here think), the number of human drivers to spread that risk over will go down. Small risk pools carrying increasing risk mean higher premiums.

It is laughable that you think the goal wasn't already zero. It has been zero the whole time.

Real life and ideals are different things. You can't promise that accidents will never happen. But you can promise that accidents will be substantially reduced.

In the US, about 35k people die per year from motor vehicle related deaths. If you get it down to 10k, then that would be a major success. Of course, you will still be fine tuning until you could get below 1000 and as close to 0 as possible.

> In the US, about 35k people die per year from motor vehicle related deaths. If you get it down to 10k, then that would be a major success.

If we were actually serious about reducing motor vehicle deaths, we would mandate that every car be equipped with a breathalyzer device. No fancy new technology is necessary, and there's plenty of low-hanging fruit (Impaired driving) that we can deal with.

For some reason, though, the religion of autonomous driving does not consider this as a solution to minimizing road fatalities.

To take this further: it's been shown every year the numbers are released that of those ~35k motor-vehicle-related deaths, ~20k are alcohol related.

On average, humans are actually pretty good at not dying in motor vehicle related accidents - or avoiding them altogether, given the sheer number of miles traveled per day in the US.

That, however, just isn't the narrative Self-driving followers want everyone to know.

The goal is always zero, but we all know that will never happen; nothing is perfect. And assistance technologies may be more dangerous than Level 5, because we physically cannot maintain concentration when few actions or decisions are required from us. Some studies even indicate manual transmissions make us safer, possibly for that reason. When Level 5 is available, and it will be, because forever is a long time, good drivers shouldn't have to buy those cars, assuming money is still a thing. Insurance companies may start forcing people financially toward self-drive-only cars by hiking rates on humans, but if you give up your privacy with a driving monitor and they assess that you are safer, they would probably rather you drive and waive the hike.

It isn't hard to show an order of magnitude fewer deaths even with non-zero deaths. At that point, expect governments to mandate Level 5 on all cars.

> Forget better frame, chassis, and body panel design to protect pedestrians! We won't need them if AI never hits anyone.

> Forget better braking systems that apply themselves automatically. We don't need that if AI can always avoid the need for sudden stops.

> Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.

Source? I've never seen anyone arguing these things for the sake of self-driving. Are you just assuming that because people who want self-driving really want self-driving, the auto industry couldn't possibly work on two things at the same time?

Self-driving? It's called a bus.

Hey man, I want all of it, especially while on the way to full self-driving cars.

Would Tesla be a heretic in this religion you just modeled? They're all about boring driver assistance, and acceptance of a low number of deaths. They don't even try Level 5 for now, and get a lot of criticism for requiring that the human pays attention at all times.

No they get criticism for calling the whole thing "autopilot" and presenting it as if paying attention was optional.

Even my engineer colleagues still operate under the impression that Tesla Autopilot == Self-Driving. One of the dangerous things, to me, is the combination of PR and user experience. The PR/advertisement creates the impression (directly or indirectly) that a Tesla drives itself, and the technology reinforces the sentiment by working well 'most' of the time. This article just reinforces this. The driver killed was an engineer who noted that Tesla's auto-pilot would veer into danger (in the spot where he was killed), filed a report/complaint, yet still had enough confidence in the tech to read a quick text.

> and get a lot of criticism for requiring that the human pays attention at all times

Because that's the law... and...

> They don't even try Level 5 for now

That's not what Elon has been telling us for years... Full Level 5 is just months away!

IMO, flying cars will eventually arrive.

Perhaps it will be limited to specific "lanes" (it will be much more palatable to the masses if it must, like cars, keep to a limited area). But it will not need to recognize pedestrians and bikers and human-driven cars, and some standard will be introduced to allow all these self-driving cars to talk to each other.

At that point, Level 5 will be much easier, even obvious. All the effort invested in assistive driving will seem silly.

"But it will not need to recognize pedestrians and bikers and human driven cars"

If they want to land at some point presumably they have to avoid landing on these things?

As well as trees, birds, powerlines, antennae, and even locusts!

Flying cars might or might not happen (see other discussions here). However, they will never be more than a niche. Airplanes need far more safety space than cars. For a car you need a few dozen meters to the car's front and back, and less than a meter side to side. For airplanes it is thousands of meters in all directions, and you are limited in altitude before you run out of atmosphere. So even though planes operate in 3D space, there is in practice less space for them than the few roads in a city.
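Plugging those rough separation figures into a quick back-of-the-envelope (the constants below are the illustrative values from this comment, not regulatory minima):

```python
# Back-of-envelope vehicle density, using the separation figures cited
# above (illustrative orders of magnitude, not regulatory minima).
CAR_GAP_M = 30      # "a few dozen meters" nose-to-tail per car
PLANE_GAP_M = 2000  # "thousands of meters in all directions"

# Cars: a 1 km stretch of 3-lane highway.
cars_per_km = 3 * 1000 // CAR_GAP_M

# Aircraft: a 1 km x 1 km footprint with 3 km of usable altitude,
# one aircraft per separation cube.
planes_per_km2 = (1000 / PLANE_GAP_M) ** 2 * (3000 / PLANE_GAP_M)

print(cars_per_km, round(planes_per_km2, 3))  # 100 0.375
```

So on these numbers a single highway stretch holds a couple of orders of magnitude more cars than the airspace column above it holds aircraft, which is the point being made.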

Humans and bikes need even less safety space than cars.

I don't think safety space is a good criterion for predicting how widespread a means of transportation will become.

A fast mode of transportation, with a large safety space requirement, may be more efficient than other modes and/or become popular.

The point is there isn't enough space for all the [single occupancy] cars to become airplanes. Speed doesn't change that because speed increases the space needed between airplanes.

Now if we change to flying buses we might be able to pull something off: an express bus that picks up people at a few stops in the suburbs for 10 minutes, then flies downtown at >150 mph, is a very compelling competitor to a car and would get anybody who currently drives 20-plus minutes downtown to ride if the cost is reasonable (those in closer-in suburbs will still drive). I don't think the business or environmental models work out, though.

>IMO, flying cars will eventually arrive.

Not so sure. I saw specific designs hyped in the 80s and 90s, and know of efforts hyped in the 70s as well. The reason they always fail is that the constraints of designing a car to go on land and the constraints of designing a flying machine are different. You can sorta kinda build something that does both, and people have, time and again, but it will be good at neither of those things. Well, unless we develop "antigravity" or something...

Add the possibility of huge damage caused by a failing/falling/crashing flying car, not just to the road and other cars there (like with a car) but to any building, group of people, etc. If it were a car replacement (and thus getting in one was laxer than flying a plane, with flight plans, airport checks, special licenses), it would also be perfect for suicide terrorism!

Here's a funny but insightful post I've found, hammering on the topic:

Listen to most discussions of flying cars on the privileged end of the geekoisie and you can count on hearing a very familiar sort of rhetoric endlessly rehashed. Flying cars first appeared in science fiction—everyone agrees with that—and now that we have really advanced technology, we ought to be able to make flying cars. QED! The thing that’s left out of most of these bursts of gizmocentric cheerleading is that we’ve had flying cars for more than a century now, we know exactly how well they work, and—ahem—that’s the reason nobody drives flying cars.

Let’s glance back at a little history, always the best response to this kind of futuristic cluelessness. The first actual flying car anyone seems to have built was the Curtiss Autoplane, which was designed and built by aviation pioneer Glen Curtiss and debuted at the Pan-American Aeronautical Exposition in 1917. It was cutting-edge technology for the time, with plastic windows and a cabin heater. It never went into production, since the resources it would have used got commandeered when the US entered the First World War a few months later, and by the time the war was over Curtiss apparently had second thoughts about his invention and put his considerable talents to other uses.

There were plenty of other inventors ready to step into the gap, though, and a steady stream of flying cars took to the roads and the skies in the years thereafter. The following are just a few of the examples. The Waterman Arrowbile on the left, invented by the delightfully named Waldo Waterman, took wing in 1937; it was a converted Studebaker car—a powerhouse back in the days when a 100-hp engine was a big deal. Five of them were built.

During the postwar technology boom in the US, Consolidated Vultee, one of the big aerospace firms of that time, built and tested the ConVairCar model 118 on the right in 1947, with an eye to the upper end of the consumer market; the inventor was Theodore Hall. There was only one experimental model built, and it flew precisely once.

The Aero-Car on the left had its first test flights in 1966. Designed by inventor Moulton Taylor, it was the most successful of the flying cars, and is apparently the only one of the older models that still exists in flyable condition. It was designed so that the wings and tail could be detached by one not particularly muscular person, and turned into a trailer that could be hauled behind the body for on-road use. Six were built.

Most recently, the Terrafugia on the right managed a test flight all of eight minutes long in 2009; the firm is still trying to make their creation meet FAA regulations, but the latest press releases insist stoutly that deliveries will begin in two years. If you’re interested, you can order one now for a mere US$196,000.00, cash up front, for delivery at some as yet undetermined point in the future.

Any automotive engineer can tell you that there are certain things that make for good car design. Any aeronautical engineer can tell you that there are certain things that make for good aircraft design. It so happens that by and large, as a result of those pesky little annoyances called the laws of physics, the things that make a good car make a bad plane, and vice versa. To cite only one of many examples, a car engine needs torque to handle hills and provide traction at slow speeds, an airplane engine needs high speed to maximize propeller efficiency, and torque and speed are opposites: you can design your engine to have a lot of one and a little of the other or vice versa, or you can end up in the middle with inadequate torque for your wheels and inadequate speed for your propeller. There are dozens of such tradeoffs, and a flying car inevitably ends up stuck in the unsatisfactory middle.

Thus what you get with a flying car is a lousy car that’s also a lousy airplane, for a price so high that you could use the same money to buy a good car, a good airplane, and a really nice sailboat or two into the bargain. That’s why we don’t have flying cars. It’s not that nobody’s built one; it’s that people have been building them for more than a century and learning, or rather not learning, the obvious lesson taught by them. What’s more, as the meme above hints, the problems with flying cars won’t be fixed by one more round of technological advancement, or a hundred more rounds, because those problems are hardwired into the physical realities with which flying cars have to contend. One of the great unlearned lessons of our time is that a bad idea doesn’t become a good idea just because someone comes up with some new bit of technology to enable it.

When people insist that we’ll have flying cars sometime very soon, in other words, they’re more than a century behind the times. We’ve had flying cars since 1917. The reason that everybody isn’t zooming around on flying cars today isn’t that they don’t exist. The reason that everybody isn’t zooming around on flying cars today is that flying cars are a really dumb idea, for the same reason that it’s a really dumb idea to try to run a marathon and have hot sex at the same time.


The question is whether the problem is one of insufficient engineering optimization or whether it requires a step-function in technology that does not exist now. It appears to me that self-driving cars are of the former type, while flying cars are the latter.

Current-resolution lidar, cameras, and radar seem to provide sufficient sensor input. The costs are too high by a long shot, but that may just be a question of getting economies of scale established. Current PC graphics hardware has sufficient bandwidth to process those sensors. I don't think you can just throw current neural net training at the problem and get Level 5 autonomy out of it - there will be lots and lots of engineering hours in figuring out what to do with that sensor data - but that's just a problem of doing many man-years of straightforward work.

Flying cars don't have adequate power from a current-gen internal combustion engine running on petroleum, and especially not enough power from lithium-ion batteries and electric motors. If you could get a power source that provided an order of magnitude or two greater power density than the best of those technologies, flying cars would be viable. Until then, no amount of engineering hours will make it work.
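For what it's worth, the power claim checks out under simple momentum theory (ideal induced hover power P = T^(3/2) / sqrt(2·rho·A)). A rough sketch with made-up but plausible numbers: a 1,500 kg vehicle, 20 m² of total rotor disk, and a generous 400 kg lithium-ion pack at 250 Wh/kg:

```python
import math

# Ideal induced hover power from momentum theory: P = T^1.5 / sqrt(2 * rho * A).
# All figures below are illustrative assumptions, not any real vehicle's specs.
mass_kg = 1500.0       # vehicle + passengers
rho = 1.225            # sea-level air density, kg/m^3
disk_area_m2 = 20.0    # total rotor disk area
thrust_n = mass_kg * 9.81

hover_kw = thrust_n ** 1.5 / math.sqrt(2 * rho * disk_area_m2) / 1000

# A generous lithium-ion pack: 400 kg at 250 Wh/kg = 100 kWh.
pack_kwh = 400 * 0.250
hover_minutes = pack_kwh / hover_kw * 60  # ignores motor and rotor losses

print(round(hover_kw), round(hover_minutes))  # ~255 kW, ~24 minutes at best
```

With realistic figure-of-merit and drivetrain losses that's maybe 10-15 minutes of hover before any forward flight, which is exactly the gap the parent is pointing at.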

> If you could get a power source that provided an order of magnitude or two greater power density than the best of those technologies, flying cars would be viable.

So essentially thousands of flying nuclear reactors piloted by average joes around the city.

That sounds really safe!

> To cite only one of many examples, a car engine needs torque to handle hills and provide traction at slow speeds, an airplane engine needs high speed to maximize propeller efficiency, and torque and speed are opposites

You can get around that by using an electric transmission. A turbine drives an alternator which drives two sets of electric motors, one for the wheels and one for the propellers. As to the rest of the post, it's attacking a straw man. I don't think people want a highway-capable car that can also fly. If you can fly, why drive on the highway?

A $50k to $100k VTOL "flying car" with a maximum cruise speed of 80 mph, a maximum altitude of 10,000 feet, a range of 500 miles, room for 2+ people, and a cargo capacity of 1,000 lb including people fits most people's definitions of a flying car. Being able to move around on the ground at, say, 15 to 25 mph without giant spinning blades would also be a great feature.

Oddly enough I think we already have something close to flying motorcycles in autogyros, but the closest thing to a flying car is a vanilla small flying airplane and those run you 250k new.

PS: There is even something of a jet pack alternative https://www.youtube.com/watch?v=bpwd-T2Qvbk

Agreed on flying cars, but I wanted to point out a glaring inaccuracy in the opening that undercuts the author's broader argument:

"By 2000 or so that curve had flattened out in the usual way as PV cells became a mature technology, and almost two decades of further experience with them has sorted out what they can do and what they can’t."

In fact, PV prices have dropped DRAMATICALLY since 2000 (https://www.sciencedirect.com/science/article/abs/pii/S03014... looks at the different trends 1986-2001 and 2001-2012), as have the prices/performance of the energy storage systems needed to make them practical.

I agree that it's not a silver bullet that will solve the fossil fuel crisis all on its own (at least not in time), but it is in line with the broader improvement in renewable costs and efficiency.

>In fact, PV prices have dropped DRAMATICALLY since 2000 (https://www.sciencedirect.com/science/article/abs/pii/S03014.... looks at the different trends 1986-2001 and 2001-2012), as have the prices/performance of the energy storage systems needed to make them practical.

Can't open this, but the abstract also shares this:

"Market-stimulating policies were responsible for a large share of PV's cost decline"

This part is artificial though (subsidies, etc).

The subsidies created the scale and experience required to lower the costs; they're not included in the cost numbers. With current technology, they are cheaper without subsidies.

I read that in various places, but I'm still suspicious. There are lots of ways to hide subsidies (green tax cuts for example).

I wonder if it's possible to solve noise problem with flying cars. Maybe use active noise cancellation at the source?

No idea if it's even theoretically possible, but would be neat.

No, it's not physically possible to do any significant active cancellation of the noise caused by a rotor / propeller spinning through the air. The noise sources are too spread out (especially with multi-rotor designs) and cover too much of the audible range.

Blade shape can be tuned to an extent for minimizing noise but that also reduces efficiency (not much to spare). Larger, slower turning blades are also quieter but there are practical physical limits on size and weight.

Aren't flying cars going to be much noisier than surface cars?

Joby is using some low noise props that are very effective. Quieter than most of the ridiculous motorcycles and loud cars on the 101.

If we care about noise, we should be addressing the motorcycle industry. I can’t hear a C-130 flying overhead at 1000 feet when I am in my house, but I can hear motorcycles zipping by on the freeway behind my house.

Yes, that was my point. We will need some clever tech to make the noise bearable.

This is why I don't understand the sad conclusion of the story in the article. The driver knew that AutoPilot had issues with that stretch of the freeway and was able to repeat the issues on several occasions but still thought it best to be on his phone, while driving 71 mph, at the mercy of AutoPilot? Especially considering that he was an engineer who had expressed concern over this, it seems silly to me that he wouldn't just take over manually during that stretch... He didn't and now he's gone because of it. It's not worth the risk/reward.

We don't need to understand it. We just need to know that, even armed with all that knowledge, he still made the decision he did, and that we should not expect others with less knowledge of how software works to be any less human.

> We just need to know that, even armed with all that knowledge, he still made the decision he did, and that we should not expect others with less knowledge of how software works to be any less human.

This is maybe one of the most important lessons of the 20th and 21st centuries (at least thus far): knowledge does not automatically prevent us from errors in judgments nor does it necessarily protect us from misfortune.

Nor does it absolve others from liability.

My friends like to talk about how awesome it would be to time travel or know what the outcome of a future decision will be. Even when armed with knowledge from observed outcomes of near-identical setup scenarios, the majority of the time they end up doing exactly what they would have normally done. Happens to me from time to time.

knowledge does not automatically prevent us from errors in judgments nor does it necessarily protect us from misfortune.

Relevant xkcd https://xkcd.com/795/

Not automatically, but it can help.

I was in this very situation at the top of Pikes Peak. A storm moved in and the park rangers closed the place and sent everybody away. They knew the statistics of being hit by lightning in that very place.

If anything I would expect less knowledgeable people to be more skeptical and cautious than somebody who has a vested interest and passion for technology. I can't imagine anybody other than a tech enthusiast who would maintain trust in a self driving car technology that previously attempted to veer them into a barrier.

100% agree with this. Most people I’ve talked to about this incident think the guy is an idiot; if you had found a car’s feature to be unsafe on a portion of your commute before, why on earth would you trust it with your and others’ lives? If you had a new fancy belaying device slip in the gym, you wouldn’t use it on your next big wall climb.

I also don’t get how everyone is forgiving him for being on his phone, in a construction zone no less. Reckless driving is reckless driving, being an Apple Engineer and Tesla owner doesn’t somehow negate that he was being a belligerent driver.

It's not about forgiving him for being an idiot. It's more about recognizing that there's more than just the one idiot on the road. To an approximation, pretty much everyone who has ever had a driver's license has been guilty of making an idiotic decision or two while behind the wheel.

And, Autopilot being a technology that, among other things, enables (or even encourages) idiotic behavior, there's real risk that placing too much blame on the driver's choices lulls us into an attitude that enables the next idiot to kill themselves and/or someone else.

I agree that everyone makes bad choices from time to time on the road. But as a society we put 100% of the blame on the driver if they aren’t doing everything in their power to prevent an accident. “Everyone makes mistakes” isn’t a get-out-of-jail-free card; booze clouds people’s judgement to the point where after enough drinks they’ll think they are good to drive, but that doesn’t change anything if they get behind the wheel, it’s still a DUI.

This guy was on his phone in a construction zone and crashed because of his lack of intervention, using a driving assist feature doesn’t somehow absolve him from being so preoccupied that his vehicle veered off the road and into a wall. Imagine if instead of him dying he had killed a construction worker; I have no doubt a jury would find him guilty of manslaughter. When you get into the driver’s seat of a car you are taking on responsibility for a death machine. I find it troubling that this conversation is happening at all, the blame should be put squarely on his shoulders.

If the car had suddenly slammed the wheel to the side, causing him to lose control, or had become unresponsive to his inputs, that would be another matter. But this could have been prevented if he hadn’t been grossly negligent of the risk he was taking on behind the wheel.

An ounce of prevention is worth many, many pounds of assigning blame after the fact.

(Unless you're plaintiff's counsel, of course.)

This has been my takeaway from the whole situation. It says something about exactly how safe this stuff needs to be for it to be considered reliable.

My impression is that it's already more reliable than humans. Not perfect, but it probably averages better than all the crashes humans are causing.

> it's already more reliable than humans

Only when comparing aggregate statistics. Not all humans are equally (un)safe drivers; insurance rates vary based on driving record for good reason.

Maybe we should keep it the way and use it as a culling method.

That’s the part I don’t get either. The only thing I can come up with is he wasn’t paying attention to the drive up to that point and didn’t notice he was in a problem area.

I have a Tesla and there are definitely problem areas. You learn them fairly quickly when you are taking the same route all the time and you’re trained to either turn off autopilot or at least be alert when going through those areas. Or maybe you test it out with your hands on the wheel ready to take over to see if they fixed the bug.

There were a couple spots on my normal driving routes where the car would inexplicably swerve. It happened one time in each spot and that was enough. Both those spots have been fixed since, but there’s no way I’d be on my phone not paying attention driving through there. I’m still cautious. There are two more spots where the car will brake to 45 mph on the highway and then speed back up after a few hundred feet. I am always on high alert around there and usually won’t even use autopilot in those areas.

> The only thing I can come up with is he wasn’t paying attention to the drive up to that point and didn’t notice he was in a problem area.

It's a well known phenomenon that the more you automate away routine tasks, the harder it is for the driver to stay alert and take over in non-routine situations. My understanding is that airplane pilots are specifically trained in strategies to avoid falling into that trap.

Tesla should pay for time spent in autopilot in these cases. It’s clearly not a finished product. That could incentivize attention paying as well.

> Or maybe you test it out with your hands on the wheel ready to take over to see if they fixed the bug.

Isn't holding the wheel always required now in Tesla's autopilot?

You have some flexibility to take your hands off the wheel for a bit. It nags you after a little bit and then gives you some more time to hold the wheel. If you keep your hands off for too long it will disengage and you can’t put autopilot back on until you put the car in park. You can satisfy the sensors by softly resting one hand on the wheel.

His phone was using data; that’s not proof he was using it and was distracted. Spotify streaming a new playlist could be responsible for that.

> Recovered phone logs show that a strategy game called “Three Kingdoms” was “active during the driver’s trip to work,” the NTSB said. Investigators said the log data “does not provide enough information to ascertain” whether Huang “was holding the phone or how interactive he was,” though it said “most players have both hands on the phone to support the device and manipulate game actions.” Huang’s data usage was “consistent” with online game activity “about the time of the crash,” according to the NTSB.

Maybe he wasn't using his phone but he was distracted or maybe even asleep. Otherwise why let the car crash?

> ...but still thought it best to be on his phone, while driving 71 mph, at the mercy of AutoPilot?

> Records from an iPhone recovered from the crash site showed that Huang may have been using it before the accident. Records obtained from AT&T showed that data had been used while the vehicle was in motion, but the source of the transmissions couldn't be determined, the NTSB wrote. One transmission was less than one minute before the crash.

For all we know, that could mean he had spotify on.

Anyhow, "he was worried about it" is no reason to shift the blame to him.

I disagree. He was informed enough about the technology and aware enough of its limitations to write extensively about it. Tesla and all other assisted driving technologies all give the user a warning to stay alert and maintain control of the vehicle. He was clearly distracted somehow because he was relying on the technology.


> Recovered phone logs show that a strategy game called “Three Kingdoms” was “active during the driver’s trip to work,” the NTSB said. Investigators said the log data “does not provide enough information to ascertain” whether Huang “was holding the phone or how interactive he was,” though it said “most players have both hands on the phone to support the device and manipulate game actions.” Huang’s data usage was “consistent” with online game activity “about the time of the crash,” according to the NTSB.

He was an engineer - he probably wanted to try and "solve" the problem (isolate the car's behavior and see what inputs are causing the incorrect reaction) and engineer a solution. As an engineer, I can understand this drive. He probably felt he was on top of it because he knew of the bug.

I can understand that drive too. I cannot understand allowing a technology that you know to be unsafe take over completely so that I can play a game on my phone.

Why was Autopilot driving above the speed limit?

Or does Tesla allow you to set the speed?

It allows you to set the speed but only up to 8 mph above the posted speed limit (or something like that).

As far as I know, my Model 3 at least doesn't limit the autopilot speed other than the max, which is 90 mph. It's just a scroll wheel; you can set it however you like.

I never used FSD but AP never let me go above 55mph in a 45mph zone when it was using traffic-aware driving.

Maybe he was fulfilling Apple's demands: either attend a meeting on the phone while driving, or wake up earlier.

(Definitely not singling out Apple here.. at IBM I had a coworker who was in an accident during a phone meeting- luckily non-fatal).

> Recovered phone logs show that a strategy game called “Three Kingdoms” was “active during the driver’s trip to work,” the NTSB said. Investigators said the log data “does not provide enough information to ascertain” whether Huang “was holding the phone or how interactive he was,” though it said “most players have both hands on the phone to support the device and manipulate game actions.” Huang’s data usage was “consistent” with online game activity “about the time of the crash,” according to the NTSB.

I didn't see that in the original article, but I do see it here:


Ah, I read the wp article yesterday and just assumed it was the same article being discussed here.

I have coworkers who regularly take meetings or text chat/email while driving. It makes me uncomfortable. I have tried hinting that it's a bad idea, but don't feel comfortable telling them directly (don't think it would do anything except harm our relationship due to me questioning their judgement) or reporting to HR (fear of retaliation, some of the coworkers are senior to me in the management chain). On the other hand my discomfort kind of makes me feel like a busybody. 99 times out of 100 nothing will happen and the 100th time will probably just be a fenderbender. Not sure what to do except continue feeling uncomfortable with it.

We had this PM who ran our meetings; when people phoned in from the road she would say, "please take your calls when you aren't driving" and boot them from the meeting. Now I do the same thing, and you should too.

"Are you driving right now? We can reschedule to something more convenient for you? I don't want you to get pulled over."

You're way too cautious in your human interactions, which is terribly sad. But I understand it because of our business culture.

It was a mistake, people make them all the time. Mistakes in cars kill people, and you don't need to be driving a Tesla to see that.

Frankly the whole point of automated control is to reduce this kind of mistake fatality, and... I mean, it's working. This was a tragedy for sure, but it was also fully two years ago. These events haven't been recurring, it's likely the specific proximate causes have been addressed, and by all reckoning these systems (while still not flawless!) are at or above parity with alert human drivers in terms of safety.

Basically, I read the same facts you do and take the opposite conclusion. People make bad risk/reward decisions all the time, so we need to take them out of the loop to the extent practical.

I don't know where you got these papers but you clearly got an awful sampling and I don't think you're giving computer vision a fair evaluation.

Granted, we're not quite ready for self driving, but there's no question that the neural network subfield of ML has absolutely exploded in the last 5-10 years and is bursting with productionizable, application ready architectures that are already solving real world problems.

You sound like an NVIDIA salesperson trying to sell me on a $3000 Titan ;)

There is no doubt in my mind that an AI with billions of parameters will be excellent at memorizing stuff.

I also have no doubt that research activity has exploded, which might be related to the generous hardware grants being handed out...

But all that research has produced surprisingly little progress over algorithms from 2004 in the field of optical flow.

The papers I looked at were the top-ranked optical flow algorithms on Sintel and KITTI. So those were the AIs that work best in practice, better than 99% of the other AI solutions.

While it's not my area of expertise, I am a bit wary of contest results. It seems like an exercise in overfitting via multiple comparisons? Maybe some algorithms with a slightly lower rank are actually more robust?

If it's as bad as you say, it seems like a critical evaluation would be pretty interesting and advance the field.

I wonder how many solutions within the AI field could just be categorized as "Automation"

That's what caused the first "AI Winter": Rules-based "AI" engines became what we call "Business Rules". AI didn't go away - it just stopped being an over-valued set of if-then logic and slots (value-driven triggers) with a cool set of GUI tools to build rule networks.

Source: Used to work for an 80s-era "AI Company"

All of them. That's how AI works. Not by making smarter machines, but by destroying intelligence by smashing it into machine-digestible bits.

>There is no doubt in my mind that an AI with billions of parameters will be excellent at memorizing stuff

We're way past memorization. We're into interpolation and extrapolation in high D spaces with Bayesian parameters. Sentiment analysis and contextual searches - search by idea, not keyword. Heuristic decision making. Massively accelerated 3D modeling with 99% accuracy. Generative nets for text, music, scientific models...

Sorry, but you're behind the times, and that's ok - one of us will be proven right in the next 1-5 years. Based solely on the work we're doing at the startup I'm working for, we're on the cusp of an ML revolution. Time will tell, but personally I'm pretty excited. And don't worry, I'm not working in adtech or anything useless.

That said, the driving problem does seem to be quite far from being solved; I agree, though it is outside my expertise. But I think the primary issue is that this is an application where error must be unrealistically low, a constraint which does not apply to many other domains. You can get away with a couple percent of uncertainty when people's lives aren't on the line!

Would you be willing to link to some papers and cite some specific algos to play with? The above cited specific algos. What are the new versions of these named algorithms?

> Granted, we're not quite ready for self driving

And yet it's literally in cars on the road.

I'm not saying you're wrong because of that. I just wonder how far from "ready" we are, and how much of a gamble manufacturers are taking, and how much risk that presents for not just their customers, but everyone else their customers may drive near.

> And yet it's literally in cars on the road.

It is not. There is no real self-driving on the road, at least not in conventional vehicles. Tesla's Autopilot is basically a collection of conventional assistive systems that work under specific circumstances. Granted, the circumstances under which it works are much broader than those defined by the competition, but for practical use cases it's still very restricted. Self-driving systems can be affected by very minor changes in lighting, weather, and other conditions. While Tesla's stats of x million miles driven under Autopilot are impressive, they do not show the real capabilities of the system. You can only enable Autopilot under specific circumstances, for example while driving on an Autobahn in clear weather. In situations with, say, limited sight, Autopilot won't turn on, or will hand over to the driver, simply because it would fail. Of course, this is for passenger safety, but these are exactly the situations real self-driving vehicles need to handle. Other leading projects like Waymo also test their vehicles under ideal circumstances, with clear weather etc.

We'll most likely see fully self-driving vehicles in the future, but this future is probably not as close as Tesla PR makes us think.

> There is no real self-driving on the road

Emphasis on real. There is definitely something that most people would refer to as "self driving" in cars on the road.

I'm not saying what is there is specifically good at what it does - I'm saying someone put it into use regardless of how fit for purpose it is.

> but this future is probably not as close as Tesla PR makes us think

Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".

> Emphasis on real. There is definitely something that most people would refer to as "self-driving" in cars on the road.

Then you'd have to define what a self-driving car actually means. At least for me, self-driving means level 4 upwards. Everything below I'd consider assisted driving.

> Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".

As I said, this Smart Summon feature also only works under very specific circumstances with multiple restrictions (and from what I've seen on Twitter it received mixed feedback).

Just because the car manages to navigate a parking lot at 5 km/h relatively reliably, that doesn't mean that at the end of the year it'll be able to drive the car at 150 km/h on the Autobahn.

Edit: Fixed my formatting

> Then you'd have to define what a self-driving car actually means.

I said "for most people". For most people I know, a car that will change lanes, navigate freeways and even exit freeways is "self driving". It may be completely unreliable but even a toaster that always burns the bread is called a toaster: no one says "you need to define what a toaster means to you".

> At least for me, self-driving means level 4 upwards.

I have literally zero clue what the first three "levels" are or what "level 4" means, and I'd wager 99% of people who buy cars wouldn't either.

> that doesn't mean that at the end of the year it'll be able to drive the car with 150 km/h on the Autobahn.

30 seconds found me a video on YouTube of some British guy using Autopilot at 150 km/h on a German autobahn last June.

Again: I'm not suggesting that it is a reliable "self driving car". I'm suggesting that it is sold and perceived as a car that can drive itself, in spite of a laundry list of caveats and restrictions.

Car engineers know what levels 1-5 are. Levels 1 and 2 are basic assistance - cruise control and the like. Level 3 means the car drives, but the driver monitors everything for issues the car can't detect. At levels 4 and 5 you can go to sleep: level 4 means there are some situations where the car will wake you up, and after you get a coffee (i.e. there is no need for instant takeover) you drive, while level 5 means the car will drive anywhere.

>It may be completely unreliable but even a toaster that always burns the bread is called a toaster

This argument is leaning toward the ridiculous.

I think only you and Elon Musk consider a "greater than zero chance of making it to your destination without intervention" to be self-driving.

Musk has good reason -- he's been selling an expensive "full self driving" package for a couple years and in order to deliver he needs to redefine the term. He's already working hard on that.

And I think you're being ridiculously pedantic if you think that a list of caveats and asterisks in the fine print means that average Joe B. Motorist doesn't view the Autopilot/Summon/etc features as some degree of "self driving".

The weird thing is there seems to be a discrepancy between these publicized figures of millions of miles of Autopilot on the roads and the general feeling you get when you turn on the system yourself. I've used it on a Model 3, and it at least feels horribly insecure: the lines showing detection of the curbs are far from stable and jitter around often. Maybe it's safer than it seems, but the feeling is I would absolutely not put my life in the hands of such a system. Just looking at all the YouTube videos of enthusiasts driving around the countryside with Autopilot, it's like watching a game of Russian roulette: suddenly the car starts driving along the other side of the road or veers off toward a house. I would categorize it as a glorified lane-assist system, in its current state.

Even Tesla's marketing copy describes it that way, so I don't think you are too far off.

>Autopilot enables your car to steer, accelerate and brake automatically within its lane.

>Current Autopilot features require active driver supervision and do not make the vehicle autonomous.

I believe every car manufacturer has a disclaimer that the autopilot can only be used as an assist. That the driver needs to keep his eyes on the road, and ready to intervene at any given time.

We're not at the self-driving level of kicking back the seat and watching Netflix on your phone yet.

I doubt we will ever get there; there will always be edge cases which are difficult for a computer to grasp: faded lane markings, some non-self-driving car doing something totally unexpected, extreme weather conditions limiting visibility for the cameras, etc.

> I believe every car manufacturer has a disclaimer that the autopilot can only be used as an assist. That the driver needs to keep his eyes on the road, and ready to intervene at any given time.

-This is the scariest bit, IMHO. Basically, autopilot is well enough developed to mostly work under normal conditions; humans aren't very good at staying alert for extended periods of time just monitoring something which mostly minds its own business.

Result being that the 'assist' runs the show until it suddenly veers off the road or into a concrete barrier, bicyclist, whatever. 'Driver' then blames autopilot; autopilot supplier blames driver, stating autopilot is just an aid, not a proper autopilot.

This is the worst of both worlds. Driver aids should either be just that - aids, in that they ease the cognitive burden, but still require you to pay attention and intervene at all times - or you shouldn't be a driver anymore, but a passenger. Today's 'It mostly works, except occasionally when it doesn't' is terrifying.

This "driver aid" model itself is starting to sound like a problem to me. You either have safe, autonomous driving or you don't.

A model where a driver is assumed to disengage attention but then expected to re-engage in a fraction of a second to respond to an anomalous event is fundamentally flawed, I think. It's like asking a human to drive and not drive at the same time. Most driving laws assume a driver should be alert and at the wheel; this is what...? Assuming you're not alert and at the wheel?

As you're pointing out, this leads to a convenient out legally for the manufacturer, who can just say "you weren't using it correctly."

I fail to see the point of autopilot at all if you're supposed to be able to correct it at any instant in real-world driving conditions.

> I fail to see the point of autopilot at all if you're supposed to be able to correct it at any instant in real-world driving conditions.

-The cynic in me suggests we need autopilot as a testbed on the way to the holy grail of Level 5 autonomous vehicles.

The engineer in me fears that problem may be a tad too difficult to solve given existing infrastructure - that is, we'd probably need to retrofit all sorts of sensors and beacons and whatnot to roads in order to help the vehicles travelling on it.

Road sensors ain't gonna fix the long tail of L5. We can't even upkeep roads as is, like crash attenuators, which would have mitigated the fatality in OP article.

Also, highway lane splits are very dangerous in general. It's a concrete spear with 70mph cars whizzing right towards it. Around here, they just use barrels of material, sand I believe. Somebody crashes into one, they clean the wreck, and lug out some more sand barrels. Easy and quick.

It isn't the SOLE action for L5 to be feasible, but I believe it is a REQUIRED action. (Emphasis added not to insinuate you'd need it, but rather to show, well, my emphasis. :))

For the foreseeable future, there's simply too many variables outside autopilot manufacturers' control; I cannot see how car-borne sensors alone will be able to provide the level of confidence needed to do L5 safely.

Oh, and a mix of self-driving and bipedal, carbon-based-driven ones on the roads does not do anything to make it simpler, as those bipedal, carbon-based drivers tend to do unpredictable things every now and then. It'll probably be easier when (if) all cars are L5.

I see this stated often, that humans are unpredictable drivers. What's the proof that automated systems will be predictable? They too will be dealing with a huge number of variables, and trying to interpret things like intent etc.

Yes, automated systems will also do unpredictable things - the point I was (poorly, as it were) trying to make was that the mix of autopilots and humans is likely to create new problems. Without being able to dig it out now, I remember a study which found that humans had problems interacting with autonomous vehicles because the latter never fudged their way through traffic like a human would - say, approaching a traffic light, noting it turned yellow, then coming to a hard stop, whereas a human driver would likely just scoot through the intersection on yellow. Result: autonomous vehicles got rear-ended much more frequently than normal ones.

So - humans need to adapt to new behaviour from other vehicles on the road.

When ALL vehicles are L5, though, they (hopefully) will all obey the same rules and be able to communicate intent and negotiate who goes where when /prior/ to occupying the same space at the same time...

I think that unless a single form of AI is dictated for all vehicles, we can't safely make the assumption that autonomous vehicles will obey the same rules. Hell, we can't even get computers to obey the same rules now, either programmatically or at a physical level.

-That is a very valid point.

And, of course, they should all obey the same rules (well, traffic regulations being one, but also how they handle the unexpected - it would be a tough sell for a manufacturer who would rather damage the vehicle than other objects in the vicinity in the event of a pending collision, if other manufacturers didn't follow suit...).

Autonomous Mad Max-style vehicles probably isn't a good thing. :/

It's only a problem if you believe in driverless cars, then it becomes a Hard Problem: "it works in situations where it's irrelevant", but so does plain old not holding the wheel: look, it's self-driving!* (in ideal conditions)

Reminds me of Bart Simpson engaging cruise control assuming it's something like an autopilot. Goes well for a little while, haha.

Which is why most car companies long ago said they wanted to skip level 3 and go direct to level 4. With level 4 when the car can't drive it will stop and give the human plenty of time to take over.

Based on Tesla's safety report [1], it's already less dangerous than letting humans drive (alone). The error rate of human drivers tends to be downplayed, while the perceived risks of automated driving are exaggerated, distorting the picture.

Yes, it's a hard problem, yes we are not nearly there and there is a lot of development/research to do. Yes, accidents will happen during the process. But humans suck at driving and kill themselves and other people daily. It's the least safe form of transportation we have.

[1] https://www.tesla.com/VehicleSafetyReport?redirect=no

The gross human fatal accident rate is ~7 accidents per billion miles in the US, including fatalities caused by incompetent or irresponsible drivers, and substantially lower in Europe. But humans drive a lot of miles.

Based on Tesla's safety report, 'more than 1 billion' miles have been driven using autopilot. Given the small data sample and the fatalities already attributed to autopilot, I think we're some way from proving it's safer than letting drivers drive alone, never mind close to being a driver substitute.
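A back-of-envelope sketch of that sample-size point (the ~7-per-billion rate and the 1 billion Autopilot miles are the figures quoted above; the rest is illustrative, not Tesla's data):

```python
import math

# If Autopilot miles carried exactly the human baseline risk,
# how many fatal accidents would we expect to see in that sample?
human_fatal_rate = 7 / 1e9    # fatal accidents per mile (US figure above)
autopilot_miles = 1e9         # "more than 1 billion" miles on Autopilot

expected = human_fatal_rate * autopilot_miles   # ~7 fatal accidents
poisson_sd = math.sqrt(expected)                # ~2.6, the counting noise

# With an expectation of ~7 and noise of ~2.6, observing anywhere from
# roughly 2 to 12 fatalities is statistically consistent with
# "no different from humans" - far too little data to claim it's safer.
```

With counts this small, the data can't distinguish "safer than humans" from "about the same" at any useful confidence.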

Marginal Revolution just highlighted an interesting detailed look at US car (driver) fatalities. https://marginalrevolution.com/marginalrevolution/2020/02/wh...

>> After accounting for freeways (18%) and intersections and junctions (20%), we’re still left with more than 60% of drivers killed in automotive accidents left accounted for.

>> It turns out that drivers killed on rural roads with 2 lanes (i.e., one lane in each direction divided by a double yellow line) accounts for a staggering 38% of total mortality. This number would actually be higher, except to keep the three categories we have mutually exclusive, we backed out any intersection-related driver deaths on these roads and any killed on 2-lane rural roads that were classified as “freeway.”

>> In drivers killed on 2-lane rural roads, 50% involved a driver not wearing a seat belt. Close to 40% have alcohol in their system and nearly 90% of these drivers were over the legal limit of 0.08 g/dL.

I don't think people give enough attention to whether broad statistics actually apply to cases of interest. That's about 40% of all driver fatalities occurring on rural non-freeway roads, of which 35% (~14% overall) were legally driving drunk.

People compare various fatality rates associated with riding an airplane vs driving a car all the time, but I've never seen anyone point out that an incredibly simple mitigation you're probably already doing -- not driving on non-freeway rural roads -- lowers your risk of dying in a car accident by more than a third. And it gets even better if you're not driving drunk!

If you measure driving quality in terms of fatality rate, it is actually the case that almost everyone is better than average. A lot better than average. But public discussion completely misses this, because we prefer to aggregate unlike with unlike.

You’re committing a logical fallacy here. Avoiding driving on those roads is only a mitigation if the accident rate is highly disproportional to their usage.

If half of all driving occurs on highways and half doesn’t, and half of all accidents are on highways, then avoiding highways will have absolutely no effect on your accident rate.

It’s possible that driving on these roads leads to a disproportionate accident rate, but you haven’t actually said that.
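To make this concrete, here's a toy sketch (all numbers hypothetical) showing that a road type's share of fatalities only matters relative to its share of miles driven:

```python
def per_mile_rate(fatality_share, mileage_share, total_fatalities, total_miles):
    """Fatalities per mile on a road type, given its share of each total."""
    return (fatality_share * total_fatalities) / (mileage_share * total_miles)

TOTAL_FATALITIES = 1000   # hypothetical
TOTAL_MILES = 1e9         # hypothetical

# Scenario from above: half the driving and half the accidents happen
# on highways - avoiding highways then changes nothing.
highway = per_mile_rate(0.5, 0.5, TOTAL_FATALITIES, TOTAL_MILES)
elsewhere = per_mile_rate(0.5, 0.5, TOTAL_FATALITIES, TOTAL_MILES)
assert highway == elsewhere

# Only a disproportionate share makes avoidance a mitigation: e.g. 38%
# of fatalities on roads carrying (hypothetically) 20% of the miles.
rural = per_mile_rate(0.38, 0.20, TOTAL_FATALITIES, TOTAL_MILES)
other = per_mile_rate(0.62, 0.80, TOTAL_FATALITIES, TOTAL_MILES)
assert rural > other
```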

True. I think there's plenty of non-statistical reason to believe you can reduce your risk of death by not being one of the 50% of drivers involved in accidents on those roads who weren't wearing a seat belt or ~35% who are over the drink drive limit though.

> You’re committing a logical fallacy here. Avoiding driving on those roads is only a mitigation if the accident rate is highly disproportional to their usage.

You're right in spirit. I actually addressed this in passing in the comment "an incredibly simple mitigation you're probably already doing". Rural roads carry less traffic than non-rural roads for the very obvious reason that most people don't live in rural areas. The disparity is documented: https://www.ncsl.org/research/transportation/traffic-safety-...

We can also note that freeway vehicle-miles (excluded from this rural roads statistic) are going to be an inflated share of driven miles precisely because the purpose of the freeway is to cover long distances.

But as to the specific number I provided ("more than a third"), you're on target in accusing me of a fallacy.

That report is comparing humans driving in all conditions vs autopilot driving in only the best conditions. Humans are deciding when it is safe enough to turn autopilot on. So no, it is not less dangerous.

That's not what the report is comparing at all. The report is comparing all vehicles driving in all conditions vs Teslas driving in all conditions (separate for with and without autopilot).

The numbers show that Teslas experience a lower crash rate than other vehicles. Granted, this can be due to a number of reasons, including the hypothesis that humans who decide to buy Teslas drive more carefully to begin with. And the numbers show that turning on autopilot further reduces crash rates.

This at least tells us that letting the vehicles with the automated driving and safety features on the road doesn't increase the risk for the driver and others, which was the original premise I responded to.

There's a million hidden variables here that could explain the difference:

- The mechanical state of the car (Teslas with autopilot tend to be new/newish vehicles, and thus in excellent mechanical shape)

- The age and competence of the driver - I'm guessing people who make enough to buy a Tesla are usually not senile 80-year-olds or irresponsible 18-year-olds

- Other safety gizmos in Teslas that cheaper cars may lack

Overall, it would be more fair to compare against cars of similar age and at similar price point.

I think the tricky part is that at some level you want to be comparing counterfactuals. That is, accident rates of Teslas on autopilot with a driver of Tesla-driver abilities, in road conditions where the accidents occur, and so forth.

It kinda seems self-evident that a car that drives you into a wall randomly is less safe than one that doesn't.

I grant that Teslas might be safer than eg a drunk driver, and so we might be better off replacing all cars with Teslas in some sense, but we'd also be better off if we replaced drunk drivers with sober ones. But would safe, competent drivers be safer, and would that be ethical? At that point are you penalizing safe competent drivers?

Drunk drivers in Teslas are actually interesting for me to think about, because I suspect they'd inappropriately disengage autopilot at nontrivial rates. I'm not sure what that says but it seems significant to me in thinking about issues. To me it maybe suggests autopilot should be used as a feature of last resort, like "turn this on if you're unable to drive safely and comply with driving laws." But then shouldn't you just not be behind the wheel, and find someone who can?

Beware of the No True Scotsman fallacy. A human who drove into a wall could not possibly have been a Safe, Competent Driver, could they? A True Safe, Competent Driver would never drive into a wall.

Unless you're serious about bringing the bar way up for getting a driver's license, I think it's fair to compare self-driving technology with real humans, including the unsafe and incompetent. In most of the world, even those caught red-handed driving drunk are eventually allowed to drive again.

And how much have Teslas driven in snowy fog in the mountains on autopilot?

What a surprise that Tesla's report would say that!

Have they released all the data to be analyzed by independent people?

Also autopilot only runs in the best of conditions. Are they comparing apples to apples?

>Based on tesla's safety report [1] it's already less dangerous than letting humans drive (alone)

You mean the company that has staked its future on selling this technology claims the technology is better than any alternative?

This is aside from the fact that the NHTSA says the claim of "safest ever" is nonsense and that there is zero data in that PR blog post.

Is there any chance that tesla is lying with statistics?

A fun example: someone was selling meat and said it was 50% rabbit and 50% horse, because he used 1 rabbit and 1 horse. The conclusion is that when you read some statistics, you want to find the actual data and check whether the statistics are used correctly; most of the time, as in this case, the people doing the statistics are manipulating you.

There was an article about a city in Norway with 0 deaths in 2019; if I limited my statistics to that city only, and to that year only, I would get the number of 0 people killed by human drivers.

> I don't know where you got these papers but you clearly got an awful sampling and I don't think you're giving computer vision a fair evaluation.

I disagree; I saw such an awful number of bugs in the ML code accompanying papers that I now take for granted that there is a bug somewhere, and hope that it does not impact the concept being evaluated.

(Here, having everyone use Python, a language that will try its best to run whatever you throw at it, feels like a very bad idea.)

There is a 40-50% drop in precision from state-of-the-art results if test images are ill-formed. The ImageNet dataset used is far from ready for real-world use cases. A bunch of IBM and MIT researchers are trying to fix this - https://objectnet.dev/

As in, "only works in great visibility on a perfectly spherical road"? That does seem an appropriate summary.

BTW, I would very much like to see progress in optical flow because I could really use it for one of my projects.

If you know any good paper that tries a novel approach and doesn't just recycle the old SSIM+pyramid loss, please post the title or DOI :)

> Granted, we're not quite ready for self driving

If that were the case, self-driving cars wouldn't be on the road. I don't think we should aim for perfection; perfection will come. We should be looking for cars that make fewer errors on average than humans. Once you have that, you can start putting cars on the road and use data from the fleet to correct the remaining errors.

Humans have an average of 1 fatality per 100 million miles driven. No one is anywhere close to 100 million miles without intervention.

Are there any fully autonomous cars on public roads with no driver that can intervene? Seems like only maybe in tightly constrained situations are we ready.

I don't think I mentioned FULLY autonomous cars; my point was that something doesn't have to be perfect before we start using it. But I probably didn't express myself correctly.

I think that the necessity of intervening drivers atm indicates that we aren't at that point yet, even if that point is far from perfection, and also that the reason any self-driving cars are on the road is because of the fairly loose but significant requirements from regulation. We might be at that point in otherwise very dangerous situations, like if I was very tired or drunk, but otherwise I don't know that I'd have so much faith in software engineers to completely control my car.

I don't understand why if you've gone through this much trouble, you don't have a blog post or even a medium article to cover your findings. I'd be very curious to see the responses from those authors and other experts in the field about your findings.

There's just something dubious about how it seems like you consistently find mistakes and problems in these papers. I'd be stunned if there was any expert that wasn't aware of the shortcomings of using a kernel that's as small as 3x3.

You’ve summed up what threw me off about their comment quite well.

Another thing, just the time alone needed to evaluate technology the way they are talking about sounds quite staggering.

> That gave me the gut feeling that most people doing the research were lacking the necessary mathematical background.

I don't find this to be the case with most ML researchers. Is it possible you have misunderstood some of these papers? It is, after all, hard to jump straight into a new field.

> The second paper converted float to bool and then tried to use the gradient for training.

This sounds like binarized neural networks. If that's the case, they keep the activation before binarization to use for backpropagation.
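For context, this trick is commonly called the straight-through estimator. A minimal NumPy sketch (illustrative only, not any particular paper's code): the forward pass uses the hard sign, while the backward pass pretends the binarization was a clipped identity.

```python
import numpy as np

def binarize_forward(x):
    # Hard binarization: its true derivative is zero almost everywhere,
    # so naive backprop through it would learn nothing - the parent
    # commenter's objection.
    return np.where(x >= 0.0, 1.0, -1.0)

def binarize_backward_ste(x, grad_out):
    # Straight-through estimator: treat binarization as the identity for
    # gradients, clipped so saturated units (|x| > 1) stop receiving
    # updates. This is why the pre-binarization activation x is kept.
    return grad_out * (np.abs(x) <= 1.0)
```

So the gradient used is a surrogate, not the (zero) gradient of the step function itself.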

> The third paper only used a 3x3 pixel neighborhood for learning long-distance moves.

A single layer of 3x3 convolutions would not be able to model long-distance moves. But I have not read a single paper where they have only used one layer. Is it possible they stacked multiple conv + pooling layers? The receptive field of each unit higher up in the stack grows pretty large in the end.
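The receptive-field growth from stacking can be checked with a small helper (a generic sketch, not tied to any specific paper):

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of a stack of conv/pool layers,
    each given as a (kernel_size, stride) pair."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # each layer widens the field...
        jump *= stride              # ...and strides compound the widening
    return rf

# A single 3x3 conv sees only 3 pixels across; five stacked see 11;
# interleaving stride-2 pooling makes the field grow much faster.
```

So a 3x3 kernel per layer doesn't mean the network only sees a 3x3 neighborhood overall.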

Yeah I have a similar feeling...but I am willing to try it. Just aware that there are probably corner cases it can't handle and generally curious about what the behavior is.

For instance, I was driving on autopilot on a section of 101 where they repainted the lanes. I let autopilot do its thing...but I closely observed and kept my hand on the wheel and foot on the brakes. Lo and behold...the car positioned itself right in the shoulder and was driving towards no-man's land. If someone wasn't paying attention in this situation, it would have been catastrophic.

Does it not feel a little crazy to you that you're willing to put your life into the hands of a machine that you know is capable of, and has already tried, killing you?

I can't think of any time in my life where I've almost driven my car into a no-man's land even after thousands of hours of driving, and yet every Tesla owner I know has half a dozen stories about the time Auto Pilot tried to steer them into a barrier or took a corner way too fast or suddenly braked for no reason.

I totally get the appeal of a Tesla, holy shit that acceleration is amazing, but Auto Pilot just does not seem worth it at all.

As a driver of many decades, having survived quite a few of my own stupid mistakes, I think this is also bias. As long as I had some direct input into the situation, I saw such near-misses as the inherent risks of driving: I should have had the wheel geometry checked, I should have replaced the lightbulb, I shouldn't have driven on old tires, I should have checked the mirror, I shouldn't have overtaken there - and I continue to use cars despite dozens, perhaps hundreds of near-misses, or even minor accidents: "yeah, we almost died today - but didn't; that's just a fact of driving, not a suicide attempt".

Whereas once you feel that all agency is stripped from you, it's all someone else's fault, especially if the mistakes feel alien (as in "no human would err in this specific manner").

This is an oversimplification of Tesla's technology. They don't use TensorFlow or other people's pre-trained models, and computer vision isn't the only part of driving; they use radar and ultrasonics as well.

I try new technologies all the time to get a sense of where it is in coming to fruition. I think what you mean is, you’re more skeptical of claims.

The same way you claim you can’t learn anything about NY in your bathroom, you don’t know anything about Tesla or Self-driving if you haven’t tried it. You should at least test drive it under controlled conditions where you feel comfortable before closing yourself off completely.

> You should at least test drive it under controlled conditions [...]

Who cares what the car does under controlled conditions? I'm sure the manufacturer did exactly that in their testing. Even when they test on public roads, there's a hands-off safety driver behind the wheel, who is paid to be on the lookout and sufficiently alert to take over in case of an unexpected excursion. (Unless the self-driving car under test is from Uber, in which case the safety driver simply watches video on their phone. Too soon?)

This is nowhere near how these cars are used in the real world. The real world is not a set of "controlled conditions", so any comfort one builds up in such a situation is merely a false sense of security.

> [...] where you feel comfortable before closing yourself off completely.

So, here's the thing: I'm comfortable driving myself. I don't get distracted, I use good judgment, I consistently prioritize the safety of my vehicle's occupants over everything else. I know exactly how flawed self-driving cars are, and how far behind the curve of my driving skill they will remain within my lifespan. That's the sum total of everything I need to know, and no amount of "controlled conditions" demos will change my mind.

P.S.: If you're from the future and you're reading this because I got mowed down by a self-driving car: ha ha! Joke's on me.

My comment about controlled conditions was about making you feel comfortable and giving yourself a safe way to try it out and get an understanding of it. It wasn't to say you should believe in self-driving; I agree it's a long way out. What I was simply trying to point out is that understanding the technology is more important than dismissing it altogether. I think you can at the least try it, understand it, then have an opinion about it (which I respect). There seem to be a lot of negative comments from people who have never sat in a Tesla or gone through a test drive.

I'm not OP, but even if a test drive went perfectly, I would remain worried about the chance of the car randomly killing me for some stupid edge case reason.

Maybe not today, not tomorrow, but maybe six months in the future, when the weather and road conditions happen to be just precisely right to confuse the system at the most dangerous time.

In the meantime I will just read/watch the stories of people more trusting than me about how well the technology works, and currently those stories don't fill me with confidence.

IMO, this is currently dangerous technology that should not be allowed on the road at all.

Common sense tells me that these half-self-driving systems are dangerous.

I would like to see a study that tested the reaction times of a person who sits doing nothing for an hour and then is suddenly expected to take evasive maneuvers, versus a sober - or even a drunk - driver who is actually driving the car continuously.

Again, going back to: see if for yourself. Experience it.

Then have an opinion, otherwise it’s like reading about NYC and saying you hate it because you read the reviews.

I've seen the news reports and discussion about the catastrophic failures, and that's enough for me.

Of course I can have an opinion without going for a ride in one, and that opinion is that I don't trust it and I won't "experience it".

I've driven a Model 3 over the course of a few days, maybe a handful of hours in total, and based on that experience I absolutely do not feel comfortable using Auto Pilot and would not buy a Tesla at the moment.

It's far more janky and susceptible to confusion than Tesla makes it out to be in its marketing, and the reality is that people simply do not pay as much attention as they are required to when using it because Tesla has convinced them it's magic that's safer than anything else on the road.

Thank you for having actually tried it then having a real opinion about it.

I have a Tesla, I usually use autopilot for highway traffic only, summon it like a valet to where I am in my parking lot, and not have to idle in hot and cold weather.

I agree, I wouldn’t use it for local roads and unclear highways, but isn’t this what they tell you? I don’t think they ever tell you that it’s full self driving right now. Also, I’ve experienced it being janky but over time it’s improved dramatically.

>not have to idle in hot and cold weather.

As a car and efficiency enthusiast, I totally try to keep my gasoline powered car from idling unnecessarily.

But what does "idling" mean in the context of a 100% electric car?

You can have the car on with AC and/or Heaters without the engine on because there is no engine. In fact they have camper mode where you can camp out in the vehicle overnight with the ac on, and huge batteries allow you to do this without worries.

>summon it like a valet to where I am in my parking lot

It can drive by itself from where it's parked to where you are?

It sure can. I have a kid, so putting him in the car when it's parked next to another car in a parking garage is painful, but with Summon every day it's amazing.

After looking into it a bit more, it seems useful in some circumstances as you describe, but hardly "summon it like a valet" when it can only move a few meters.

From your description, I was thinking more of something like waiting at the entrance to a public carpark and the car comes to you.

I design medical devices and have the exact same opinion. It's the same logic as avoiding the first model year of production when buying a used car.

I design industrial automation equipment and have much the same feelings about those devices. I've programmed robot cells, with redundant software and hardware safety systems. I've run through the checklists to make sure the operators are SIL-3 safe with a high-speed high-power robot running just inches away through a piece of 1/4" polycarbonate - there to keep the operators out, it won't keep the robot in. I know all the engineering that goes into those systems, and that's why I will never ride a Kuka-coaster (a chair bolted to a 6-axis robot).

Also, I'm quick to reach for a simple pneumatic cylinder to solve a problem. Perfectly capable of using new electric-servo-ballscrew-hotness to do a similar move, but the value provided by tried-and-tested systems is hard to overstate.

Code from papers is probably optimized for "first to publish". It's also overfit in a non-traditional sense, since people want to beat SOTA by as much as possible. Also, the heuristic tips and tricks and autotuning you'd want in a production model would exceed the paper length 10x. Also, the author is motivated to NOT provide a bug-free, easy-to-put-in-production version of the code, since that would lower the $ value of their expertise. A cocktail of all the wrong incentives!

Probably the production versions of those models are suboptimal in different ways but work better in practice...

> I avoid new technology, exactly because I'm an engineer.

> I wonder if that is just me

Can't find it now but there was a poignant quote or anecdote I read the other day that expresses this exact sentiment - the more you know about technology, the less likely you are to use it. I think it was in the context of e.g. smart homes and voice assistants or online tracking - if you're aware of how much data they hoover up and what can be done with that, you'd be Very Afraid.


Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via alexa! I love the future!

Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.

https://biggaybunny.tumblr.com/post/166787080920/tech-enthus... (via foxrob92)

It would be nice if that sort of awareness was more prevalent on HN.

The cult of cheer, eyes waxing over from a disappointed jet-age generation, and a little shiny syndrome.

"I avoid new technology, exactly because I'm an engineer."

I would never call myself an engineer, although I have worked in environments where there were lots of engineers (not in the software world) and one definition of "engineering" that I heard was:

"Meeting the requirements while doing as little new as possible"

That's a very accurate definition in the mechanical world. Every junior engineer with an overly complicated, cutting edge, unproven idea gets shot down pretty quickly. In our shipyard safety is more important than anything else. After that is cost.

I would almost agree with you, but please indulge with me in the following thought.

Let's say that you can go to work on foot or by bike.

Let's also say that on foot your probability of getting killed is 1 in 1,000,000, and the commute takes 20 minutes.

On a bike, commuting takes only 10 minutes, but the probability rises to 1 in 200,000. Five times more.

You end up deciding to use the bike every day to go to work.

In this example, you decided to trade comfort (faster commute) with a slightly higher probability to end up dead.

Imagine now you need to decide whether to commute in your Tesla with or without autopilot.

Let's assume (I might be wrong) that Tesla's autopilot increases your chances of getting killed. (for simplicity, let's ignore the consequences for other people on the road).

Would you still trade comfort (not needing to drive) with a slightly higher probability to die?
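The trade-off above can be sketched in a few lines. To be clear, these are the commenter's hypothetical figures, not real statistics:

```python
# Hypothetical per-commute fatality risks from the thought experiment above.
P_WALK = 1 / 1_000_000   # chance of dying per walking commute (20 min)
P_BIKE = 1 / 200_000     # chance of dying per bike commute (10 min)

relative_risk = P_BIKE / P_WALK          # ~5x riskier per trip

# Over roughly 500 commutes a year, the cumulative risk of each option:
yearly_walk = 1 - (1 - P_WALK) ** 500    # ~0.05%
yearly_bike = 1 - (1 - P_BIKE) ** 500    # ~0.25%

print(relative_risk, yearly_walk, yearly_bike)
```

Both yearly risks stay small in absolute terms, which is exactly why people accept the faster, riskier option without much thought.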

These sorts of analyses get complicated fast. Walking is riskier in part because you're more exposed to the "killer self-driving cars", but also people with preconditions will walk. Like, if you're drunk you'll walk home, if you're unfit you're more likely to walk than cycle, etc., people walking probably get mugged more often, ...

But OP is making the opposite assumption here - that cycling is riskier. Which, keeping all other factors constant, seems intuitively correct to me since you go at a higher speed and you are on the road with cars rather than on the sidewalk.

As you note, you're coming from impossible premises ("assuming your actions never affect anyone else"). Do you expect to come to meaningful conclusions?

In other words, as a buyer, I do care about the occupant safety rating, you are correct in that. As a road user, I also care that other people's cars consider me as irrelevant, my potential death an externality to be amortized in the purchase price.

Yes. Current self-driving might be safer than the average driver, but what most people forget is how unsafe it is when it fails. It fails harder than most average drivers, and even bad drivers.

In my eyes self-driving should still be called driving-assistance.

> Current self-driving might be safer than the average driver,

So far there isn't any evidence for this assumption.

If you compare the number of accidents per km, I'm pretty sure the numbers for self-driving are much lower than for human drivers.

When a self-driving car is involved in an accident, it's in the news all over the world.

Human drivers kill or get killed every day, in every country.

I think the statistics to compare self-driving miles vs human driven miles are quite tough to judge.

Tesla was criticized quite a bit at one point for comparing deaths per Autopilot mile to deaths per all motor vehicle miles. This was a bad comparison because motor vehicles included motorcycles, as well as older, poorly-maintained cars, etc.

Then Tesla released a comparison between Autopilot miles in Teslas and human-driven miles in Teslas where Autopilot was eligible to be engaged. This felt like a much more fair comparison, but Teslas are lenient about where Autopilot can be engaged - just because the car will allow it doesn't mean many people would choose to do so in that location, so there might be some bias towards "easier" locations where Autopilot is actually engaged. There's also the potential issue of Autopilot disengaging, and then being in an accident shortly afterwards.

This is morbid, but I also wonder about the number of suicides by car that are included in the overall auto fatality statistics. If someone has decided to end their life, a car might be the most convenient way (and it might appear accidental after the fact). That would drive up the deaths-per-mile stat for human drivers, but makes it tougher for me to decide which is safer - Autopilot driving or me driving?

Humans have 1 fatality per 100 million miles, self driving is nowhere close to this.

The statistics for SDVs have an issue which might invalidate them: "fatality per miles driven" doesn't take disengagements into account: how do you even do that, meaningfully? "Fatality avoided because human driver stepped in - another triumph for autonomous driving"? That doesn't make much sense...

It makes sense in that, even in the real world, the real humans can be counted on to pay attention and intervene to some extent. That's an important counterpoint to the vivid thought experiment of "how could you possibly expect someone to pay attention after hundreds of hours of flawless machine operation?"

Unless you're talking about getting rid of the steering wheel and deploying the current system as Level 5. In that case, yes, interventions should count against it.

I’m unconvinced by all of the AI hype. But I will say that just last week, a human-driven car snapped a pillar outside of my office and bent a bike rack into a mangled mess. The driver was unconscious, with a drug needle and empty beer can on the passenger seat.

So, the hard failure for humans is pretty bad, too, just different. I suspect there’s little overlap on a Venn diagram of the hard failure modes for AI and humans.

> Current self-driving might be safer than the average driver

Citation needed, as far as I know this is not true at all.

Obviously take with a grain of salt give the source but.. https://www.tesla.com/VehicleSafetyReport

"In the 4th quarter, we registered one accident for every 3.07 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.10 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.64 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles."

Yes, people disengage auto-pilot (or it does so itself), but it is at least plausible to say that in this like-for-like comparison of drivers and vehicles self-driving is at least comparable in safety.
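A quick back-of-the-envelope look at the quoted figures (numbers taken verbatim from the report text above; "miles per accident", so higher is better, with all the comparison caveats discussed in this thread):

```python
# Q4 figures as quoted from Tesla's Vehicle Safety Report above.
miles_per_accident = {
    "Autopilot engaged":       3.07e6,
    "active safety only":      2.10e6,
    "no assist features":      1.64e6,
    "NHTSA all U.S. vehicles": 0.479e6,
}

baseline = miles_per_accident["NHTSA all U.S. vehicles"]
for mode, miles in miles_per_accident.items():
    # How many times more miles between accidents than the NHTSA average
    print(f"{mode}: {miles / baseline:.1f}x the NHTSA miles per accident")
```

Note that even the fully unassisted Teslas beat the NHTSA average by roughly 3.4x, which suggests the fleet and driver population differ substantially from the general one - exactly why these comparisons are hard to interpret.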

I don't think it is at all reasonable to assume this. Autopilot disengagement is a major driver of "Tesla Autopilot safety".

I don't know what the disengagement rate is for Tesla, but we do know that for Waymo in early 2019 it was roughly one per 11k miles driven[0]. Given that Waymo uses lidar and is generally considered to be closer to actual autonomy than Tesla, this speaks poorly of actual Autopilot safety.

[0] - https://9to5google.com/2019/02/13/waymo-self-driving-disenga...

So when I see multi-car pileups on the motorway with cars that are a torn up husk (actually I've seen totalled cars just on inner city junctions where the speed limit is supposedly 50 kph), that's failing less hard than self-driving?

Can you cite any sources that AI fails harder? I'm not exactly sure what you are referring to.

The dead Uber pedestrian where the AI dismissed the data as faulty and did not slow down at all.

I think every human would slow down if they see things that they cannot explain. An AI will not.

It's basically the same problem as when an image recognition AI is 99% sure that the photo of your cat shows guacamole.

Current AIs do not have a concept of absolute confidence, they only produce an estimate relative to the other possibilities that were available during training. That's why fully novel situations produce completely random results.
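A minimal illustration of "relative, not absolute, confidence": a softmax classifier always distributes 100% of its belief across the classes it was trained on, so a completely novel input still produces a confident-looking "winner". (Toy logits only, no real model.)

```python
import numpy as np

def softmax(logits):
    # Standard numerically-stable softmax: outputs always sum to 1.
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(42)
garbage_logits = rng.normal(scale=4.0, size=3)  # stand-in for a novel input
probs = softmax(garbage_logits)

print(probs)        # sums to 1.0 by construction
print(probs.max())  # some class "wins", however meaningless the input
```

There is no "none of the above" output: the probability mass has to go somewhere, so novel inputs get mapped onto whatever training class happens to be nearest.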

Some humans drive blackout drunk. You overestimate our competence.

> dead Uber pedestrian

Elaine Herzberg was in dark clothing crossing a dark street well away from any crosswalk or street lighting. Would a human driver have performed better? From the footage I saw she was nearly invisible, I would have hit her too.


This was not a hard fail for the AI.

Nice story... if only it matched the evidence. The algorithm detected the person 6 seconds before the crash, but as she didn't match any of the object classes conclusively, it took no action. You read that right: "there's something at 12 o'clock. Bike? Person? Unknown? Who cares, proceed through it!" If that's not a hard fail, IDK what would be.

That's not what happened. The AI was not programmed to drive through anything. It was incorrectly programmed to dump previous data when the categorization changed. It correctly identified at each point that it was meant to slow down and/or stop, but by the time it determined what the obstacle was, the previous data had been thrown out and it didn't have enough time to stop properly. In your example, it was more like "There's something at 12 o'clock. Bike? Person? Unknown? Stop!!" just before actually hitting the person.

The car did "see" an obstacle for over 6 seconds and did not brake for it, now someone is dead. You are haggling over semantics to make it look like this did not happen and/or this is not a bug. Atrocious.

(Or, more charitably, "oops, somebody forgot that object persistence is a thing" does not excuse the result)

What? That's not at all what I'm doing and you're being extremely disingenuous to suggest that. I'm simply correcting misinformation. The car wasn't programmed to drive through anything. It was programmed to throw away information. Either way, it's an atrocious mistake and I've even said, elsewhere in these comments, that the people responsible for that code should be held liable for criminal negligence. There's no need to lie about my point or my position to defend yourself. That's just silly.

I have misunderstood you then, and I apologize.

Then I forgive you and I'm glad we see eye-to-eye on this. Everyone should be appalled at Uber's role in this and their response along with the lack of repercussions for them.

The video released by Uber was extremely misleading. Here is a video on YouTube of the same stretch of road taken by someone with an ordinary phone camera shortly after Elaine’s death: https://www.youtube.com/watch?v=CRW0q8i3u6E

It’s clear that a) the road is well lit and b) visibility is far, far better than the Uber video would suggest.

An ordinary human driver would have seen Elaine & taken evasive action. This death is absolutely Uber’s responsibility.

> An ordinary human driver would have seen Elaine & taken evasive action.

Looks like this was a hard fail for the AI then. We can say with better than 90% certainty that a human would have saved the situation, probably would have stopped or avoided easily. My mistake.

"May have seen" is more appropriate as every day pedestrian get killed on well lit road by human drivers.

Which is also true. This is perhaps the underlying issue: "we expect cars to be safe, while also expecting driving fast in inherently unsafe conditions." In other words, the actual driving risk appetite is atrocious, but nobody's willing to admit it when human drivers are in the equation. SDVs are disruptive to this open secret.

The assumption was for an ordinary driver, the expectation is that given sufficient lighting the vast majority of drivers would see and avoid a pedestrian. Most of the millions of pedestrian vehicle interactions daily go by without incident, one or the other party giving way, so this would be the normal expectation for an ordinary driver.

We can reasonably assume that pja is aware of the existence of abysmal drivers and fatal crashes that should not have happened. I doubt their intent was for "would" to be interpreted as "100%".

LIDAR would have picked that up dead easily.

Just like LIDAR would have picked up https://www.extremetech.com/extreme/297901-ntsb-autopilot-de...

And just like LIDAR would have picked up https://youtu.be/-2ml6sjk_8c?t=17

And just like LIDAR would have picked up https://youtu.be/fKyUqZDYwrU

And just like LIDAR would have picked up https://www.bbc.co.uk/news/technology-50713716

These accidents are 100% due to the decision to use a janky vision system to avoid spending $2000 on lidar; and that janky vision system failing.

"Brad Templeton, who provided consulting for autonomous driving competitor Waymo, noted the car was equipped with advanced sensors, including radar and LiDAR, which would not have been affected by the darkness."


The car had LIDAR.

Yep, and it detected Herzberg in the roadway with plenty of time to spare.

"the car detected the pedestrian as early as 6 seconds before the crash" [...] "Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words, to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.”" [1]

'Oh well it was very dark' is not a factor in the crash that killed Herzberg

[1] https://techcrunch.com/2018/05/24/uber-in-fatal-crash-detect...

It had lidar and ignored an unknown object it was tracking. If that's not damning for the whole field, I don't know what is.

The car had LIDAR. It wasn't an equipment failure, it was a failure on the part of the programmers. They had programmed in a hard delay to prevent erratic braking, and the system was programmed to dump data whenever an object was re-categorized. The system detected the person in the road as an unknown obstruction and properly detected that it was moving into the road, but it re-categorized that obstruction 4 times before correctly identifying it as a person. By that point, the velocity and LIDAR data had been dumped because of the re-classifications and the car only had <2 seconds to stop.
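A toy model of why that design is fatal: velocity has to be estimated from an object's track history, so dumping the history on every re-classification resets the estimate to "unknown". (All names here are illustrative, not Uber's actual code.)

```python
# Track history for a single observed object.
history = []

def observe(pos, label, prev_label):
    global history
    if prev_label is not None and label != prev_label:
        history = []              # the reported bug: relabeling dumps the track
    history.append(pos)

def estimated_speed():
    # Velocity needs at least two observations of the same track.
    return history[-1] - history[-2] if len(history) >= 2 else None

prev = None
for t, label in enumerate(["unknown", "vehicle", "bicycle", "unknown", "pedestrian"]):
    observe(pos=float(t), label=label, prev_label=prev)
    prev = label
    print(t, label, estimated_speed())  # speed stays None at every step
```

Because the label flips at every step, the track never accumulates two observations, and the system never "knows" the object is moving into its path until it is far too late to brake.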

Seriously, that is hopeless. It is worse than I'd expect from a university project.

It’s $75,000 USD for the LIDAR sensor, a far cry from $2,000.

I think that was the price for Velodyne's 64 laser LIDAR. They've discontinued it and now the top of the line is the Alpha Prime VLS-128 which has 128 lasers and is ~$100K.

There are many other cheaper LIDARs, even in the Velodyne lineup, but they are less capable.

The exposure time and the dynamic range of the sensor affects the visibility of the person in that video - it is very likely that a non-distracted human would have performed better.

The vehicle was equipped with both radar and lidar. The victim was detected as an unknown object six seconds prior to impact, and at 1.3 seconds prior to impact when the victim entered the road the system determined that emergency braking by the driver was required, however the (distracted) driver was not alerted of this.

> the system determined that emergency braking by the driver was required, however the (distracted) driver was not alerted of this.

Why would the system notify the driver that emergency braking was required instead of simply braking?

"the nascent computerized braking technology was disabled the day of the crash"


The footage shows how car's cameras saw the accident. Human eyes have much greater dynamic range than that, so I wouldn't assume that a human driver would not perform better. Something appearing pitch black on this footage could be well recognisable to you. Also, this car's lidar also failed to recognise the pedestrian so if this isn't an AI fail then I don't know what is.

This isn't one of the actual driving cameras, this is a shitty dashcam with variable exposure, with the footage then heavily compressed. This is not used at all for the self driving.

Just adding this if you think we should somehow give Uber the benefit of the doubt here. They released footage from a pinhole dashcam sensor that is not used by the system, knowing fully well it would be pitch black and send the ignorant masses into a "she came out of nowhere!" chant.

This is absolutely a fail, a woman died. The question is whether or not this incident is an example of "fails harder than most average drivers" or a hard fail.

With the sensor array available to it the car should have done better, no question.

But to make the claim "fails harder" I would be looking for a clear cut situation where a human would almost definitely have outperformed the AI.

Human eyes do have miraculous dynamic range so we would likely see more. Can we say with 90% certainty that a human would have saved the situation?

Can we say with 90% certainty that a human would have saved the situation?

Yes, given the misleading nature of the dashcam video I think we can. This was not a pitch dark road lit only by headlights where an obstacle "appeared out of no-where". This was a well-lit main street, with good visibility in all directions. An ordinary human driver would have had no problem identifying Elaine as a hazard and taking the appropriate avoiding action, which was simply to slow down sufficiently to allow her to cross the road.

The backup driver in the car was apparently looking at their phone or some other device and not watching the road at the time.

Based on the evidence you have put forth your conclusion is logical and reasonable. You have convinced me that my statement was in error.

> The backup driver in the car was apparently looking at their phone or some other device and not watching the road at the time.

"According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review"

She was looking at a device, yes, but not her phone.

Uber put one person on a two person job, with predictable results.

After the crash, police obtained search warrants for Vasquez's cellphones as well as records from the video streaming services Netflix, YouTube, and Hulu. The investigation concluded that because the data showed she was streaming The Voice over Hulu at the time of the collision, and the driver-facing camera in the Volvo showed "her face appears to react and show a smirk or laugh at various points during the time she is looking down", Vasquez may have been distracted from her primary job of monitoring road and vehicle conditions. Tempe police concluded the crash was "entirely avoidable" and faulted Vasquez for her "disregard for assigned job function to intervene in a hazardous situation".


Fair enough, you're right, she was likely looking at her phone.

The rest of my point still stands though.

"There's something out in front of me, an unclear shape right in my path, relatively far, 6 seconds out. I will drive straight through it instead of slowing down, because...[fill in your Voight-Kampff test response]". Well? Is that at least 90% human?

Moreover, try this dashcam video: https://youtu.be/typj1asf1EM It's 10 seconds long, and makes the pedestrian look almost invisible except for the soles.

However, when I took that video, both the crossing pedestrians were clearly visible, not vague shapes that you only notice when you're told they exist. So much for video feed fidelity.

Not classifying a fire truck stopped on the highway as something you need to avoid is a good example.


Well for example all the YouTube videos that show how Tesla drivers take over the wheel in case of a failure.

Those failures include driving straight into barriers.

Other systems like OpenPilot show the same.

When it fails you better take control of the wheel or you will crash hard.

For clarity, I personally mistrust AI driving and wouldn't be comfortable using it, but the question I have is more along the lines of - if you take the incidence of serious driver error (e.g texting and crashing, speeding and sliding off a road, falling asleep etc) does that happen less often than autopilot going nutso for the demographic driving it? Failing hard seems very possible for both, so stats backing up that AI fails hard regularly seem applicable.

I believe he's alluding to the nature of the accidents. They're high intensity events which are more likely to be fatal (speeds were ~70mph). They're not fender benders when the autopilot fails.

But isn't that what autopilot is used for? High speed Highway traffic? I don't trust AI cars yet but I'd like to know my instinct is true on this and not just my natural inclination to avoid unfamiliar tech.

These are high-risk areas; if autopilot is "failing hard" with a regularity equal to or higher than normal, then this would be good to demonstrate with stats. I'm guessing Tesla doesn't really release that info?

Auto-pilot that drives above the speed limit ought to indicate to you its scope.

Still seems like people treat auto-pilot like auto-drive and die as a result.

That's not a tech fail, IMHO.

Why on earth am I getting downvoted so hard? It was a genuine question.

It's not. I got my pilot's license and fly in a technically advanced aircraft, and all that it means is that there's quite a bit of automation there to help you out. The lessons imbued in it, the issues you learn exist, actually using it in very critical phases of flight, etc., builds an appreciation for both the wonders and dangers of automation.

Going through that experience has 1000% made me more wary of autonomous vehicles.

I love this take!

I’ve been similarly untrusting of a lot of “high tech” approaches to various things, and I derive a lot of joy from products/services/etc that take a “back to basics” or at least minimally- or non-digital philosophy. In particular: I have an affinity for automatic watches and carbureted motorcycles.

If nothing else, it’s a bit of a break from what feels like a constant struggle to keep all the gears turning at work.

BUT... I’ll confess I’m also a sucker for innovations / the occasional new hotness. I recently upgraded a Kawasaki KLR650 (a competent but... “well tested”, shall we say? motorcycle) in favor of KTM’s top-of-the-line adventure bike. The technology difference between the two (despite only 3 model years between them) is incredible: the latter adds 5x the power, ride-by-wire, cruise control, lean-sensitive ABS/traction control, an up/down-capable quickshifter, probably a thousand other improvements.

One day, about 1100 miles into owning the new bike, the dash pops up a low tire pressure warning from the tire pressure monitoring system. It showed the rear pressure was fairly low, and sure enough, I’d picked up a small screw between the treads.

Certainly a TPMS is nothing compared to anything self-driving, but honestly it was a bit of a wake-up call — I WANT systems on my bike to increase my safety level.

I’m not really sure what the lesson is here. Maybe “Look for the middle grounds (the ABS/TPMS-maturity systems) between ancient technology (anything on my beloved KLR) and bleeding edge (non-replicable papers on self-driving cars)”? Seems like this holds up ok, especially as a consumer of those techs... But maybe not for the innovators?

This is not new technology at all, but perhaps it is technology being used in an unnecessarily complex manner. I sat in on a lecture given by the University of Maryland circa 2000 here in Florida concerning their W4 video surveillance system, which could tell the difference between different objects quite well. It did this on a 400MHz dual-Pentium machine and could even recognize different signs at driving speed. This was before TensorFlow or Python really took off or GPUs were powerful, and it did not require a supercomputer to operate, or a cloud either; it was just math:

https://dl.acm.org/doi/10.1109/34.868683 http://citeseerx.ist.psu.edu/viewdoc/download?doi=

> So now I'm afraid to use anything self-driving ^_^

I'm in the same boat, but unfortunately you have to share the road with these things, so it's hard to completely avoid them. I do find myself avoiding Teslas on the freeway more and more.

I started avoiding Teslas on the freeway after watching the video of this guy putting on his makeup and shooting a youtube video while on autopilot: https://www.youtube.com/watch?v=cT_rEa4X1nA

Just curious: do you also avoid Subarus, SEATs, BMWs, Audis, and the other cars with adaptive cruise control? Or is it only Tesla drivers you consider dangerous?

It's called Tesla Autopilot. The average user is more likely to do silly things in a car that is advertised to have an autopilot rather than adaptive cruise control.

ML hype spawned a lot of self driving car hype, and a lot of promises which can't be delivered in any short time frame. It started with Comma.ai, but others picked up the full ML torch and went with it. Other companies are making steady, sure progress using the old school robotics approach of throwing sensors at the problem, Waymo, for example. Others are being reckless and trying to use vision alone (Tesla). It's a mess.

However, safe self-driving is coming, slowly but surely (I work at a company which produces tech for these guys). The hyped companies are in trouble, but car OEMs, partnered with companies you've never heard of, are making slow, steady progress, all the while being subject to government functional safety requirements, particularly in the US and EU. There is zero chance that a Tesla or GM car will be allowed to fully self-drive, so no matter how advanced these systems are, they're sold as level 2 systems requiring driver oversight, and they qualify as cruise control in regulations.

Today, we have full autonomy in some truck routes (only as proofs of concept), in ship yards, parking shuttles, mining equipment, quarry trucks, etc, places where the problem domain is more constrained. Generalized self driving is a ways off, but by the time you can make use of it, it will be safe, it just won't come from Tesla or Cruise or Uber.

Same here, always get those puzzled looks from non-technological friends when I am so picky about new tech.

The response makes sense. I've never thought about it before, but I don't like gadgets at all (EE and programmer for 25 years). I like simple things: no home assistant, no tablet anymore, I rarely use a computer at home, I like simple setups for my musical instrument and TV, and no game consoles (I used to love games as a kid, but not anymore). It's less to worry about, and I feel I get more done.

I've got a beautiful 2001 Mercedes-Benz ... it drives well, avoids all the trendy stuff but still has too many electronic components (most recently I had to replace the computer board that decides whether you should be able to shift). My daughter has a 1971 Super-Beetle she's stored in my garage ... when it breaks, it's something mechanical!

I've been looking for reasons to avoid Kickstarter and Indiegogo (as I got scammed on Kickstarter, and I'm not happy with the overall quality of several projects that did succeed), and you gave me inspiration. Thank you.

I mostly agree with you that there are many false positives in the research papers. Still, you shouldn't outright dismiss the possibility of you not implementing their models correctly.

I was reviewing the authors' own source codes.

"As a result, I have become the late adopter among my group of friends because the first iteration of any new technology usually just isn't worth the issues." This is only true for things where reliability matters more than the new features the new tech provides. In the case of cars, of course, I'd prefer reliability.

But for other things like the original iPhone, sometimes new tech is just better than what's out there, even if there will always be some flaws in the first versions.

It’s been a while for me, but I’m pretty sure there are techniques for approximating the gradients of step functions, although the paper may not have mentioned them (perhaps to keep some secret sauce).
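The best known of these is the straight-through estimator (Bengio et al.): threshold on the forward pass, but pretend the threshold was the identity on the backward pass. A minimal NumPy sketch of the idea (my own illustration, not from any particular paper):

```python
import numpy as np

def binarize_forward(x):
    # Hard threshold. Its true derivative is zero almost everywhere,
    # so naive backprop through it produces no learning signal.
    return (x > 0).astype(np.float32)

def binarize_backward(grad_out, x):
    # Straight-through estimator: pass the upstream gradient through
    # as if the threshold were the identity, zeroed outside |x| <= 1
    # (the clipped variant, which keeps training stable).
    return grad_out * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(binarize_forward(x))                    # [0. 0. 1. 1.]
print(binarize_backward(np.ones_like(x), x))  # [0. 1. 1. 0.]
```

Whether the authors intended something like this is anyone's guess; the point is only that a zero-gradient step function is a known, solvable problem.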

In the case I evaluated, the authors later admitted to using supervised training for initializing their supposedly unsupervised learning methods.

Also, I was reviewing their public source code release and there was no approximation. That part of their loss function had simply never worked, but they had not noticed prior to the release of their paper.

And because they trained slightly differently from what they described in the paper, the AI worked competitively well nonetheless.

I agree with all but one of your points, and they're all well made.

However, by saying "AI is stochastical gradient descent optimization" you're equating AI with Machine Learning/Deep Learning.

The list of AI technologies also includes things like Artificial Life, Genetic Algorithms, Biological Systems Modeling, and Semantic Reasoning.

I suspect that when we get true AI, i.e. Strong AI or Artificial General Intelligence, we will achieve it through a combination of these techniques made to work together.

>I avoid new technology, exactly because I'm an engineer.

There is a saying that software (security) engineers don't have IoT or any "smart" devices in their homes.

Plus... what’s the point? Maybe we automate a couple jobs for long haul shipping or road trips but what a fruitless effort!

Why don’t we work on drones that pick us up and take us places to really leap ahead, get out of traffic, and do something amazing?

Cars driving themselves? How incremental.

> That gave me the gut feeling that most people doing the research were lacking the necessary mathematical background. AI is stochastical gradient descent optimization, after all.

ML is only one part of 'AI'

AI is not stochastic gradient descent optimization; neural nets are. AI is bigger than that: planning, deduction, game theory, and so on. You can find out by buying and reading a textbook.

It should probably be called "CoPilot" and issue prominent alerts when it has a poor understanding of the situation, so the driver at least knows to pay more attention.

This resonates with me, and of course the classic xkcd on electronic voting https://xkcd.com/2030/

The difference is that electronic voting is a bad idea in principle, regardless of the implementation. Fully self-driving cars might actually be possible, but probably not with current software and hardware.

Do you mind explaining why it is a bad idea in principle? It feels like, given a decent implementation (a big ask, to be sure), it would be safer and more convenient than the current way.

Voting systems have a number of key requirements. To prevent bribes or coercion, the vote has to be anonymous and the voter must not be able to prove his vote. On the other hand, it must be possible to verify that each cast vote has been counted correctly. Finally, the whole process should be transparent and understandable for every interested voter.

These requirements can be easily fulfilled with a well designed paper ballot system. I don't see any chance of doing the same with anything computerized.

> To prevent bribes or coercion, the vote has to be anonymous and the voter must not be able to prove his vote.

But we don't apply this rule to the votes where bribery and coercion are most practical to start with, where there are a small number of voters that can be intensely targeted, and swinging a small number is sufficient to decide major outcomes.

I think it should be emphasized this accident occurred in May 2018.

The biggest tech thing I am afraid of that everyone else is excited about is digital elections. The ML field is pretty clear about its limitations.

"I felt that way looking at state of the art AI vision papers" - which papers did you look at?

Optimistically, maybe everybody is hiding their 'secret sauce'. But I expect you're probably right.

I was optimistic like that in university. Great times :)

But then I went into consulting and saw that big companies have teams of lawyers that settle proactively out of court to keep inconvenient truths out of the public opinion.

Like the first few exploding iPhones.

(Just an example, never worked there)

BTW, did you hear about the Uber crash where their AI couldn't track a pedestrian and then killed her?

That’s an unfair characterization if you’re talking about the crash in Arizona. Perhaps: “Did you hear about the Uber crash where the AI couldn’t track a pedestrian jaywalking at night across the middle of a nearly pitch black two lane highway and then killed her?” would have been more apt. https://youtu.be/ufNNuafuU7M

That being said, I’m a huge skeptic of the current state of self driving cars. I would have assumed these systems use LIDAR as well as vision and could have at least slammed on the brakes.

Police concluded that given the same conditions, Herzberg would have been visible to 85% of motorists at a distance of 143 feet (44 m), 5.7 seconds before the car struck Herzberg.

A vehicle traveling 43 mph (69 km/h) can generally stop within 89 feet (27 m) once the brakes are applied.

The police explanation of Herzberg's path meant she had already crossed two lanes of traffic before she was struck by the autonomous vehicle.


I was also misled by the poorly exposed "official" video. Given the numbers above, there was time for a human driver to see her and even come to a complete stop. Further, since she was moving from one side of the road to the other and only entered directly into the vehicle's path in the last 1.3 seconds (see the image in the "Software issues" section of the Wikipedia article), it is likely that a minor slowdown would have been enough to avoid the collision, and she would have completed her crossing safely.
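A quick sanity check of the numbers quoted above (the 1.5 s perception-reaction time is my assumption, a common traffic-engineering rule of thumb, not from the police report):

```python
MPH_TO_FTPS = 5280 / 3600          # feet per second in one mph

speed = 43 * MPH_TO_FTPS           # ~63.1 ft/s
time_visible = 5.7                 # seconds before impact (police estimate)
distance_available = speed * time_visible       # ~359 ft

reaction_distance = speed * 1.5    # ~95 ft travelled before braking begins
braking_distance = 89              # ft, from the figure quoted above
distance_needed = reaction_distance + braking_distance  # ~184 ft

print(f"available: {distance_available:.0f} ft, needed: {distance_needed:.0f} ft")
# available: 359 ft, needed: 184 ft
```

Even with a generous reaction-time allowance, an attentive driver would have had roughly twice the distance needed to come to a complete stop.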

I hate that people attribute that accident to a visual issue because of that video. It wasn't a visual issue. It was 100% a programming issue and everyone involved should be criminally liable for negligence, IMO.

It only looks pitch black in Uber's badly-exposed dashcam video. See https://news.ycombinator.com/item?id=22307013

Oh wow, that is certainly different from the news video, which I assumed was relatively unbiased.

A more precise narrative, as shown in the evidence Uber submitted to the NTSB (as opposed to the PR video), is: "The AI saw something for 6 seconds, but because it couldn't decide what it was, it drove through. It turned out to be a human."

"I have become the late adopter".

Exactly my thoughts when reading about the blended wing aeroplane yesterday.

Many opinions, but little data. It's hard to assess anything under these circumstances.
