I wonder if it's just me, but when new technology is introduced and hyped, I usually take a quick look at implementations, research, and talks just to get an idea of what the state of the art really is like.
As a result, I have become the late adopter among my group of friends because the first iteration of any new technology usually just isn't worth the issues.
You know that effect where you open a new book and immediately spot a typo? That's how I felt looking at state-of-the-art AI vision papers.
The first paper would crash, despite me using the same GPU as the authors. Turns out they got incredibly lucky not to trigger a driver bug causing random calculation errors.
The second paper converted float to bool and then tried to use the gradient for training. That's just plain mathematically wrong: a step function's gradient is zero everywhere except at the jump, so there's nothing to descend.
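For intuition, here's a minimal numeric sketch (my own illustration, not the paper's code) of why a hard float-to-bool threshold kills training: the gradient of a step function, estimated numerically, is zero everywhere away from the jump.

```python
def step(x, threshold=0.5):
    """Hard float -> bool -> float conversion, like the mistake described above."""
    return 1.0 if x > threshold else 0.0

def numerical_grad(f, x, eps=1e-4):
    """Central-difference estimate of df/dx."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Everywhere away from the threshold the estimated gradient is exactly 0.0,
# so stochastic gradient descent receives no signal to learn from.
for x in [0.1, 0.3, 0.7, 0.9]:
    print(x, numerical_grad(step, x))  # gradient is 0.0 at all of these points
```

(In practice this is why people use tricks like straight-through estimators when they really need a hard threshold in a trained network.)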
The third paper only used a 3x3 pixel neighborhood for learning long-distance motion. That doesn't work; I cannot learn about New York by walking around in my bathroom.
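To put a number on that: with stride-1 convolutions, stacking KxK kernels only grows the receptive field linearly, by K-1 pixels per layer. A quick back-of-the-envelope sketch (my own illustration, not anything from the paper):

```python
def receptive_field(num_layers, kernel=3):
    """Receptive field of a stack of stride-1 convolutions:
    starts at 1 pixel and grows by (kernel - 1) per layer."""
    return 1 + num_layers * (kernel - 1)

# One 3x3 layer sees 3 pixels across; even 10 stacked layers see only 21.
# An object that moves 50+ pixels between frames is simply invisible to it.
print(receptive_field(1), receptive_field(10))  # 3 21
```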
That gave me the gut feeling that most people doing the research were lacking the necessary mathematical background. AI is stochastic gradient descent optimization, after all.
Thanks to TensorFlow, it is nowadays easy to try out other people's AI. So I took some photos of my road and ran them through state-of-the-art computer vision AIs trained on KITTI, a self-driving-car dataset of German roads. None of them could even track the wall of the house correctly.
So now I'm afraid to use anything self-driving ^_^
Its followers see no other possible solution as acceptable.
Instead of putting our heads together to make really good driver assistance technologies and being satisfied, the darn thing needs to also drive itself everywhere, otherwise we're leaving something on the table! Until we have zero deaths, we cannot stop demanding self-driving - as if AI were somehow a magical solution that's always better than humans.
Forget better frame, chassis, and body panel design to protect pedestrians! We won't need them if AI never hits anyone.
Forget better braking systems that apply themselves automatically. We don't need that if AI can always avoid the need for sudden stops.
Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.
No, forget it all. My car needs to drive me all by itself, everywhere I need to go, no matter how hard or impossible of a problem that is. Driver assist is boring... Level 5 is sexy, and that's what I want!
What really bugs me, as someone working in consulting as a hands-on backend developer with a previous background in technical consulting at an accounting Big4 (student job), is the insane amount of PR, politics and marketing talk by people who have ABSOLUTELY NO IDEA what they are talking about. I witnessed politicians and C-level industry people talking out of their asses to ... idk ... drive stocks? Look smart in the face of Tesla? PR? No clue.
Self-proclaimed experts in magazines and talk shows raving about how AI is going to change everything. I had colleagues telling customers about the magnificent rise of AI and none of them could even spell "gradient descent". Backed by a law or accounting degree they KNOW that self-driving is just around the corner and they are very vocal about it while easily impressed by tightly controlled demos at some international tech fair. Everyone just seems to fall into the hype trap without a single brain-byte spent on researching the actual issues and what's most sad is actual engineers/technical people not doing their due diligence and informing themselves BUT THEN GOING OUT TO TELL THEIR NON-TECHNICAL FRIENDS ABOUT THE AUTONOMOUS FUTURE. Ugh.
I gave a minor internal talk at work about self-driving cars, aimed at people from other backgrounds with a light, superficial interest sparked by "Tesla" and "the hype". They were surprised to hear that we are likely decades away from actual Level 4 (not the marketing garbage that some companies put out), because even a slight change in weather can really fuck with all the systems on the roads right now.
This is a glorious Gibson-esque turn of phrase that I hope goes down in history and is picked up in common vernacular.
Back on topic: The more I know about technology, the less I want it or trust it to work. I assume this is similar to pilots not wanting somebody else to fly, or surgeons not wanting to go under the knife.
We know that we don't know enough about autonomous driving. Instead of saying "gee-whiz, that is cool" like the 'unwashed masses', we think "this isn't ready yet!"
And you know what? It's wonderful. It really is. I am free to look around and see other people on the freeway. They look so bored. So tired. So used up. Meanwhile my wife and I, we turn on the karaoke machine and sing our way to the destination. It is absolutely the way to go. You can take my Tesla and its AP functions out of my cold dead hands. Maybe that's how I go, and I'm alright with it.
I have seen this too much. People who have a deep understanding of how things work are busy learning and doing. People who can spend their energy on politics and marketing can do so because they are not busy learning and doing. I wish there were a solution for this. Corporate IT departments are particularly full of this.
Why? It never exceeded consumer expectations, which are extremely high for automated systems. Even a correctness rate of 99.9% means multiple errors per day for most people. Consumers expected approximately zero errors, despite not being able to achieve that themselves, sometimes even with their own handwriting!
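The arithmetic is worth spelling out. Assuming a heavy user enters a few thousand characters a day (an illustrative volume, not a measurement):

```python
# "99.9% correct" still means errors every single day at realistic volumes.
accuracy = 0.999
chars_per_day = 5_000            # assumed volume for a heavy note-taker
errors_per_day = (1 - accuracy) * chars_per_day
print(round(errors_per_day, 1))  # 5.0 errors per day
```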
Because handwriting is made by humans, there is some percentage of it that simply cannot be reliably recognized at all. But people hold that against computers more than other people because computers are supposed to be labor saving devices.
Likewise, roads are made by people, and other cars are driven by people, so a self-driving car will never be perfectly safe. But that is essentially what advocates are promising.
That’s especially true if people expect the same level of convenience, especially in terms of time. People speed and take risks all the time when driving, in the name of saving time. I think it’s likely that an autonomous car optimized for safety would also be a car that just takes a lot longer to get anywhere with.
Speed matters. It’s a big reason we all use touch keyboards on our phones instead of handwriting recognition.
Excellent point, stealing that. I work in automotive as an engineer and have traveled around the world, and I think the realistic possibility of self-driving cars, without major changes in how we make roads everywhere, is extremely low.
The only handwriting recognition system which ever worked correctly with a low error rate was Palm Graffiti. It forced the user to learn a new shorthand writing style designed specifically to avoid errors.
Because it asked users to learn a new way of writing, when the recognition failed, users were more likely to blame themselves, like, "Oh, I must have not done that Graffiti letter right, I'll try again."
But when it came to recognizing regular (i.e. natural) handwriting, users believed inherently (i.e. somewhat unconsciously) that they already knew how to write, and the machine was new, so mistakes were the machine's fault.
I think this supports the grandparent's point about using the actual strokes, including angle and azimuth, to reconstruct intent.
I was also fairly proficient with Graffiti, back in the day, but I consider that an input method, not handwriting recognition. I was facile with T9 as well.
There is little market demand for handwriting recognition, and thus little active research goes into it. Not because it is a difficult or problematic technology, but because better alternatives exist that make it irrelevant.
Even if someone were to come up with an absolutely perfect handwriting recognition system, most people wouldn't use it. Why? Because the advent of multi-touch screens means that most people can type much faster than they can hand-write anyway.
What changed was that touch screens became better. The old resistive touch screens were clunky, slow, and inaccurate. You could put a keyboard on them, but the lag and poor accuracy meant you couldn't really touch-type comfortably. Then capacitive multitouch came along and made on-screen keyboards much more responsive and accurate.
But also, Blackberry and (pre-smartphone) phones with SMS made people more comfortable with the idea of using keyboards for text entry on handheld devices. And crucially, auto-correct and predictive text entry covered up for accuracy errors and made text entry by keyboard even more attractive.
Another handicap for self-driving cars is that the problem is effectively harder at the start, when the majority of traffic will still be operated by human drivers, who are a lot harder to predict reliably than other autonomous vehicles.
Beyond that, I strongly believe that software engineering is still ridiculously immature and unable to deliver safe, reliable solutions without strong hardware failovers. We have countless examples of this. We simply don't have the maturity yet, we're still figuring out what type of screwdrivers we should use and whether a hammer could do the trick.
The visual recognition needed is well beyond the systems today.
Something like half your human brain is devoted to visual processing.
There's a tendency to think that things like language are what make the human brain special, or our ability to plan or think abstractly, and we talk about things like "eagle eyes", but the truth is humans are seeing machines, with most everything else as an afterthought.
The reason your cat will attack paint spots on glass for hours and flips the hell out about laser pointers is because their visual systems are too simple to distinguish between those and the objects that actually interest them, like insects.
Vision is not the easy part of AI.
I think it is, actually. Going from raw pixels to objects is the (relatively speaking) easy part. It's the next part (using that for planning and common-sense reasoning) that's the hard part. Machine learning has already advanced past humans in this regard for many classes of problems - which is part of the reason why captchas are getting so hard.
This was several years ago, hence the move away from obfuscated text (which was getting harder and harder to read): https://spectrum.ieee.org/tech-talk/artificial-intelligence/...
I'd be surprised if basic perception tasks used as human-ness tests last more than a few more years.
It probably made a lot of sense in the Southern California design center. In Upstate New York, that camera is covered in road spray and salt, and not even my brain could see anything or act effectively through it without cleaning it. Even after doing that, it will get dirty again after a few minutes of driving.
I’d guess that at least a few dozen people will be hurt by this decision.
Take this problem to the self-driving car and things get even worse. You’re going to have a lot of problems with sensor effectiveness that cannot be magically fixed with software.
I was waiting for my bus to work one morning after a large snowfall. The snow clearing crews were hard at work, but the street was effectively blocked by piles of snow, men, and machines.
Yet, my bus arrived on time, *driving down the sidewalk*.
I am not sure how any self-driving system could have figured that out :)
(I mean the ones who have been successfully marketed to here, not the marketers).
I got out by never really getting in I am afraid. I worked for 1.5 years in technical consulting as a student job while getting my informatics degree.
Once I obtained my degree, I declined an offer by said Big4 firm and took another offer where I got to go hands-on with coding. I had previous coding experience which helped and then amazing colleagues who boosted my start.
Fair enough it's not very good - that one just went 600m - but it's hard to argue it won't exist for decades when it exists now.
And historically, going from sorta-works-but-is-rubbish in info tech (e.g. early cellphones, the internet, and so on) to works-well doesn't seem to take that long. Five to ten years, perhaps, typically.
Going from early cellphones to smartphones was an engineering problem. All the technology was already available and it was a case of putting it all together in a way that worked and that could be manufactured at scale and for profit.
With vehicle autonomy, the problem is that we don't know how to create autonomous AI agents yet, so we don't know how to make a car driven by such an agent. Claims of level 4 autonomy should, at this point, be treated like claims of teleportation or time travel: theoretically possible, but we don't know how to make it happen in practice.
In reality, it seems like it would resolve quickly - you get out and yell at them, call the police if that doesn't work, etc. But it can get more sinister - criminals _already_ block the road to force drivers out of their car to rob them. Now, if you know that might happen, you can just drive around the obstacle. Unless, of course, you're in a self-driving car where you might not even be able to get it to do a u-turn. Related issues would be areas where the practical advice is "don't stop" - not even at red lights - if you're there late at night due to the risk of car jacking (this might be out-of-date now, to be fair). Can rules like that be encoded into a self-driving car?
OK, yes, you probably could find a way to do it. But that's almost certainly just the tip of the iceberg in terms of "ways people will fuck with self-driving cars" and "things people do that are technically illegal but still safer than the alternative." Could you solve enough of those in 5-10 years, _on top of_ making self-driving cars work in sun/rain/snow/fog/night/tornados/etc safely and consistently? I think that's very unlikely. Decades seems far more likely to me.
 (Non-EU only) https://wgntv.com/2015/03/31/robbers-set-up-fake-road-blocks...
It seems like it would take an endless list to cover every new edge case. Our technology is amazing, but I almost think the places where autonomous driving makes sense are the edge case.
The thing I trust the least is the operators they want to put in these cars. They better be completely autonomous, self-maintaining, and somehow tamper-proof. It's a really tall order, which I hope we do fulfill one day. But maybe I'm a pessimist, and they have it all figured out already.
Re all the edge cases - yeah that'll take a while.
You can certainly have all the cameras that you need but if the bad guys have their faces covered and identifying marks hidden then you're not going to be able to do much.
I'm pretty sure there is an early scene in the movie Solar Crisis that plays out similarly to what you're describing. This movie was on one of the cable movie channels when I was growing up, so I got a higher-than-normal dose of it.
I don't remember a cow, though (but then this probably isn't the only sci-fi movie out there with such a scene). I think one of the characters first parked a motorcycle in the road, but the truck plowed through it. After that, they stood in the road instead, and that caused the truck to screech to a stop right in front of them, blasting a message on a PA about how they were breaking the law by impeding it.
There is always a way.
Most car manufacturers have had this figured out for a long time with crumple zones and the like.
> Forget better braking systems that apply themselves automatically
Assisted braking technology is already implemented in some cars. Hell, Tesla implements basically exactly what you're asking for...
> Forget seat belt enhancements since that'll just inhibit nap time in my self driving car.
Teslas don't let you sleep in your car, you have to move the steering wheel periodically to prove you're still paying attention or it'll pull over and shut down.
Also, I'm not quite sure what you're expecting seat belt enhancements to be.
Far be it from me to defend the AI hype, but your "things we should be focusing on instead" don't make much sense when we ARE focusing on them.
1. Front crash test. Procedure: Crash car into stationary barrier at 35 mph. Is also applicable to face-to-face crash with car of same size, going at same speed.
2. Side crash test. Procedure: Slam concrete block into side of stationary car at 38.5 mph.
3. Side pole test. Procedure: Drag car sideways towards a pole.
4. Rollover resistance. Procedure: Compare the car's footprint to the height of the center of gravity.
The biggest thing to notice is that not one of these metrics involves pedestrians. Metrics 1-3 can be easily improved by making a bigger car, elevating the passengers and providing more crumple room. Metric 4 is unaffected, as the track width is increased to compensate.
If a low sedan hits a pedestrian, the pedestrian rolls over the car, having a lower impulse given over a longer period of time. If a high SUV hits a pedestrian, the pedestrian is knocked back, having a higher impulse given over a shorter period of time. Safety ratings need to account for the danger cars pose to others.
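The physics here is just the impulse-momentum theorem: the momentum change is roughly the same, so the average force scales inversely with contact time. A rough sketch with made-up but plausible numbers (not real crash data):

```python
def average_force(mass_kg, delta_v_ms, contact_time_s):
    """F_avg = impulse / time = m * delta_v / t."""
    return mass_kg * delta_v_ms / contact_time_s

pedestrian_kg = 75
delta_v = 10.0  # ~36 km/h speed change

# Rolling over a low hood spreads the impact over a longer time...
rollover = average_force(pedestrian_kg, delta_v, 0.30)
# ...while a blunt knock-back from a tall SUV front is far more abrupt.
knockback = average_force(pedestrian_kg, delta_v, 0.05)
print(round(rollover), round(knockback))  # 2500 15000
```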
Source (SSF): https://www.safetyresearch.net/rollover-stability
In the U.S. at least, pedestrian safety concerns mainly affect prescriptive legislation (i.e. no pop up headlights). Some countries and blocs have testing similar to crash tests, but I'm not really sure how effective something like that is: any meaningful standard would need to have exceptions for different categories of vehicle. Though honestly I can't see that much can be done about pedestrian safety once your vehicle is colliding with a human being.
> I can't see that much can be done about pedestrian safety once your vehicle is colliding with a human being.
I don't agree; there are some definite choices in car design that affect the aftermath of a collision and can make the difference in whether the pedestrian survives. As pointed out above, a pedestrian hit by a car has a better chance to roll over the hood, versus an SUV, where the pedestrian would probably be knocked down and fall under the car.
Obviously no one is going to survive being hit at 70, but you can make a big difference in the 25-35 mph range, which is the normal speed where there are a lot of people around.
But wait, isn't that exactly why they are now in such a rush and cannot accept anything less than full autonomy?
From what I heard, Uber has been burning billions of VC money to capture market share. And their model just won't work financially if they need to pay drivers a living wage. So they attempted trickery to pay them less as "independent" contractors, but now that governments are stepping in to prevent that, there is only one option left:
Uber needs self-driving cars or else they'll go bankrupt.
At least, that's my theory.
I believe it--at least, I hope it would take an existential threat to make them push kludged-together pedestrian-killers into use on the roads.
An Uber driver is the very definition of an independent contractor. Many drivers also drive for Lyft. How can they be “employees” of Uber while also driving Lyft? Or doing Door Dash?
If Uber drivers were employees, they would have to work where and when assigned. As it is now, Uber drivers come and go as they please.
I am not sure how an Uber driver is any different than a freelance journalist or musician.
But I have a hard time believing that the technology is anywhere close to being mature. You need a lot of contextual knowledge to drive safely in unusual circumstances. I totally believe that within well-defined limits, AI already outperforms humans, but traffic has no well-defined limits. Anything can happen.
Not sure that's what the self-driving cars proponents envisioned though.
I'm bearish on both self-driving and the universal adoption of electric cars, but everyone will be plugging their car in at home long before they can make one that drives itself to the gas station.
Absolutely not. Having to wait for hours to get a few kms of driving distance is way too much of a friction point for EVs to ever be more than a novelty, in addition to the usual complaints of people with only street parking, garages without outlets, etc. Either we'll get the battery-swap situation rolling or invent a faster charging tech, but either way there'll be some sort of "station" in the picture.
I realize that a lot of people here are privileged enough to own a single family home, but the majority of humanity lives in apartments and parks on the street. Trickle-charging at home is not a universal solution. The only practical solution seems to be some form of rapid charging of the car's energy storage. Either by pumping huge amounts of amps into a huge battery pack, or adding some kind of chemical fuel that gets reacted in an internal combustion engine or a fuel cell.
At the moment, policy in Amsterdam is that if you own an electric car, you get a charging point in your street. I don't know how fast those are, but they don't have to be fast. They're still useful for overnight charging, especially if the city continues to add more when more people get electric cars. I don't understand the argument that this is not in any way a solution. It is.
It is also completely bonkers, as it makes bold assertions about the real world, with which reality will eventually disagree, whereas older religions tended to keep their most dogmatic positions unfalsifiable (the afterlife, the soul, vague prophecies, ...)
It is clearly a goal, and a realistic one within a few decades even pessimistically, given that its ability is creeping upwards.
There’s lots of proof that autonomous vehicles are technically possible, but the leap to “definitely better than humans” is a very big one and it’s really being taken on faith right now.
In contrast, treating a disease directly affects the incidence of that disease.
For example, we could have embedded special markers (or spikes with RFID, or more active wireless, or any number of things) that could have provided far more accurate lane detection, as long as we were willing to require some up-front work to deploy it for special routes. Combined with very reliable lane detection, and restricted to specific deployed areas where it could be tested, computer vision and/or radar/lidar for vehicle and large-object detection (which would be mostly sufficient for most highway/freeway use) could likely provide a very safe system. The lower requirements for achieving safety might mean we could actually get some buses outfitted as well.
But that would require some actual state action, as no private company would (or should, if they were to keep it proprietary at least) deploy along large stretches of highway or freeway. Covering I-5 from Northern California to Southern California would provide many opportunities, but be an enormous cost.
This is Uber's endgame - get humans out of the loop. As long as cars still need human drivers this cost savings can't be realized.
> Forget better braking systems that apply themselves automatically.
This already exists: Mercedes, for example, first rolled out "Active Brake Assist" to a production model in 1996. Moreover any fully self driving vehicle will definitely need to be able to apply the brakes.
A switch from 20/80 to 80/20 is transformative, changes the default attitude and has further societal effects.
Like parking or more specifically - not parking. Here's an example development: "And parking clocks in at a full 29% of the developed land here, taking up twice as much total space as the actual buildings."
Not to mention sending a car out for errands. Self driving will free up parking garages, but fill up the roads.
I think the manufacturers will sell cars to those who want to run a taxi company. This is, and will remain, a race to the bottom because competition is fierce: people buy on price and on whoever gets to them first.
They will also sell cars to normal people. Most people don't think about how convenient it is to have the storage of their personal car. If parking is free or cheap (which it is in the suburbs), having your golf clubs in the trunk or a spare diaper in the glove box is worth the little extra cost, not to mention your car is always there, so no need to wait for a taxi in busy times.
Of course there is also the city, but if you live in a city, self-driving cars still suffer from congestion, so public mass transit still has big advantages. In fact, because the self-driving taxi spends time empty waiting for the next rider, it adds to the problem, meaning that for even more people public transit is worth the hassle (which in turn means more demand to make transit better).
A self-driving car is software and hardware... you can sell hardware but software only gets licensed. Software is really where the value is, and that won’t be owned by anyone but the company that made it.
Look at the history of mobile phones, which were originally sold to consumers only by the carriers. As the phones got better and better software, the business model attached more and more to the manufacturer.
On the way something like UberPool would be needed. Or the taxi can just drive me to the nearest train station and I can take mass transit into the city.
I doubt it. One of the biggest obstacles you have is families and the need for infant car seats and child booster seats. Car ownership is here to stay for a while.
It does require some planning (and some support from the service providers), but it's definitely a solvable issue; if people want to do that, it can be arranged.
As a passenger, there is no difference to me whether my car/bus/train is self-driving or not. As long as I'm not driving, it doesn't matter if a meat, or a silicon neural network operates it.
Given the above, how does this make self-driving a transformative technology?
That said, there are legitimate issues with incrementally improving assistive driving. People text and drive today without assistive driving. If a car can mostly make its way in rush hour down the highway autonomously, does anyone think that people won't routinely watch Netflix on their commute?
I suspect it's because it's less interesting to the demographic that's more concerned about being driven to and from bars on their night out or who just don't want to own a car period. But highway driving seems like a huge convenience and safety enhancement even if you just punt on city driving for at least the next few decades.
Frame and chassis have never been safer and manufacturers continue to improve. Many (most) new cars have automatic emergency braking that continue to improve. New cars seem to have an ever increasing number of air bags to protect passengers.
All these things are happening at the same time that self-driving is taking place. Tesla FWIW is pretty good at all the above despite their focus on self driving as well.
I agree it is just as ridiculous to say we need Level 5 to make something useful. Will it be decades before we have cars without steering wheels? Sure, maybe even longer. But what exists is already pretty great in most environments and only getting better. (I.e., crawling along in a traffic jam at ~15 mph is something I would really love to never do again, and it seems self-driving systems can handle this with aplomb these days.)
Self-driving is only as hyped as it is due to the futuristic lure of the idea.
Just follow the money. The near-future financials of companies like Uber and Lyft (and to a lesser extent Tesla) rely on fully autonomous self-driving.
for cars to be safe for drivers and riders, we need to optimize two things and strip away the rest (especially an over-reliance on technology as savior):
1. minimize distraction and maximize attention on the act of driving
2. maximize the skill of the driver in controlling the vehicle in all sorts of (unexpected) conditions
technology can actually reduce safety, either because it allows drivers to pay less attention or it lowers the skill bar. driver assist technologies--lane assist or automatic braking--fall into this category.
that's not to say safety technologies shouldn't continue to be developed--structural crash safety improvements, for example, don't have the same detrimental effects on driver attention or skill (with the caveat that ever-increasing weights can decrease control and increase lethality).
it's important to distinguish technological advancements that actually improve safety from those that merely improve our perceptions of it.
He's still a bit too optimistic for my taste.
As cool as Comma.ai is, I really believe that their approach to allowing so much community involvement with little to no oversight is highly irresponsible.
That being said.... If they do succeed, and get some sort of government approval or oversight.... You bet I'm putting that stuff back in. It's cool AF.
example: stop-and-go traffic - instead we could unlock millions of hours of human productivity (or provide entertainment).
example: self-parking and come-to-me, esp in closed garages. Parallel parking is hard for humans and we're poor at space utilization.
example: environments where obstructions are unlikely... airplanes have had auto-pilot for a long time... why not highway 80 in the middle of nowhere? why not trucks queuing to load/unload containers at port (or conference centers) - just drive up your incoming truck, grab your personal effects and take over the next outgoing truck while it queues for hours delivering a load and getting the next load.
There's lots of uses for self driving vehicles even before we deal with the hard cases. But they're not sexy and of course a tiny fraction of the labor savings and freedom-making.
You don’t get a stalled car on a Victor airway in the sky. You also don’t have to worry about obstacle avoidance for the most part, in the sky. If an aviation autopilot can’t hold the altitude or heading (such as in turbulence or in mountain up and downdrafts,) it will simply keep the wings level. Airplane autopilots follow explicit instructions: fly heading 143 at 14000 feet; descend at 500 fpm. Hold over a VOR using 1 minute legs at 200kts.
A car autopilot on the other hand, has to react to the physical surroundings. Not only “follow the I-10 at 75mph,” but also, watch out for incoming traffic, lane closures, or some kid on a bicycle that wanders into the road, or a dead animal in the road, or wet roads, icy roads, etc. There is no such thing as Instrument Flight Rules for driving, meaning a car autopilot has to be aware of the visual, while an airplane doesn’t: it just flies the precise route programmed without any awareness that the route might fly through a flock of geese. An airplane autopilot will fly you right into the ground if you let it. There is a lot of skill and training around airplane autopilot, and while it’s amazing and useful, it’s a lot more than simply turning it on and it flies you automatically to Denver.
When you have thousands of machines traveling in close proximity at speeds exceeding 50 mph, there will be deaths; this is unavoidable. We need to reduce them as much as possible, but to demand ZERO before the technology can be used is just unrealistic.
That said, just because some people are working towards Level 5 does not mean all of the other things you are asking for are not also being worked on; it is not a zero-sum game. There are enough people that we can have teams working on both.
This complaint is repeated for everything: "Well, if people were not working on X drug that I don't care about, they could cure cancer."
We can have better braking systems, better frames, etc. AND still try to achieve Level 5 autonomous driving. It is not an either-or proposition.
Also, at the very least, a self-driving car should reach the level of a good driver; having self-driving cars cause as many deaths as drunk or inattentive drivers do nowadays isn't defensible. Especially since there's usually no explanation and nobody to hold accountable.
And a scenario we can easily imagine is that a buggy update goes out to the whole fleet overnight that starts killing people all over the place.
The common case, accidents on par with manual human driving, goes out the window until the software is rolled back, and for 12 hours, 24 hours, however long, we get a number of deaths that far outpaces what humans are capable of. The "worst case" would never apply to a manual/human population as a whole, all at once.
Well, one of the issues is that someone more or less has to be held accountable. And that someone pretty much has to be the manufacturer. No one is going to hand over full control to a vehicle and accept the responsibility if that vehicle commits vehicular manslaughter because "software isn't perfect."
It's actually an interesting legal situation because, other than maybe drug side effects, there aren't a whole lot of consumer products which, properly used and maintained, sometimes randomly kill people and we're OK with that because sometimes stuff just happens.
Who's responsible if you get pulled over for going 75 in a 65 mph zone?
How do we go about testing this? By tallying up autonomous deaths until there are fewer per year than human drivers?
>We need to reduce those as much as possible but to demand ZERO before the technology can be used is just unrealistic
Human driver skill varies immensely by person. Anyone who is (or even considers themselves to be) a "good" driver will never accept "average" death rates as a risk when getting into an autonomous car. I know I wouldn't.
The goal has to be zero or it will never be accepted by the public.
What will happen is that human-controlled cars will become $$$$$$ to insure once Level 5 is better than humans. At that point, if you can afford it, sure, you can reject it, but get out your wallet.
Real life and ideals are different things. You can't promise that accidents will never happen. But you can promise that accidents will be substantially reduced.
In the US, about 35k people die per year in motor vehicle crashes. If you got that down to 10k, it would be a major success. Of course, you would keep fine-tuning until you got below 1,000, and as close to 0 as possible.
If we were actually serious about reducing motor vehicle deaths, we would mandate that every car be equipped with a breathalyzer device. No fancy new technology is necessary, and there's plenty of low-hanging fruit (impaired driving) that we can deal with.
For some reason, though, the religion of autonomous driving does not consider this as a solution to minimizing road fatalities.
On average, humans are actually pretty good at not dying in motor vehicle related accidents - or avoiding them altogether, given the sheer number of miles traveled per day in the US.
That, however, just isn't the narrative Self-driving followers want everyone to know.
> Forget better braking systems that apply themselves automatically. We don't need that if AI can always avoid the need for sudden stops.
Source? I've never seen anyone arguing these things for the sake of self-driving. Are you just assuming that because people who want self-driving really want self-driving, the auto industry couldn't possibly work on two things at the same time?
Because that's the law... and...
> They don't even try Level 5 for now
That's not what Elon has been telling us for years... Full Level 5 is just months away!
Perhaps it will be limited to specific "lanes" (it will be much more palatable to the masses if it must, like cars, keep to a limited area). But it will not need to recognize pedestrians and bikers and human-driven cars, and some standard will be introduced to allow all these self-driving cars to talk to each other.
At that point, Level 5 will be much easier, even obvious. All the effort invested in assistive driving will seem silly.
If they want to land at some point presumably they have to avoid landing on these things?
I don't think safety space is a good criterion for predicting how widespread a means of transportation will become.
A fast mode of transportation, with a large safety space requirement, may be more efficient than other modes and/or become popular.
Now if we change to flying buses we might be able to pull something off: an express bus that picks up people at a few stops in the suburbs for 10 minutes, then flies at >150 mph downtown, is a very compelling competitor to a car, and will get anybody who currently drives 20 minutes or more downtown to ride if the cost is reasonable (those in closer-in suburbs will still drive). That said, I don't think the business or environmental models work out.
Not so sure. I saw specific designs hyped in the 80s and 90s, and know of efforts hyped in the 70s as well. The reason they always fail is that the constraints of designing a car to go on land and the constraints of designing a flying machine are different. You can sorta kinda build something that does both, and people have, time and again, but it will be good at neither of those things. Well, unless we develop "antigravity" or something...
Add the possibility of huge damage caused by a failing/falling/crashing flying car, not just to the road and other cars there (as with a car) but to any building, group of people, etc. If it were a car replacement (and thus getting into one were laxer than flying a plane, with flight plans, airport checks, special licenses), it would be perfect for suicide terrorism too!
Here's a funny but insightful post I've found, hammering on the topic:
Listen to most discussions of flying cars on the privileged end of the geekoisie and you can count on hearing a very familiar sort of rhetoric endlessly rehashed. Flying cars first appeared in science fiction—everyone agrees with that—and now that we have really advanced technology, we ought to be able to make flying cars. QED! The thing that’s left out of most of these bursts of gizmocentric cheerleading is that we’ve had flying cars for more than a century now, we know exactly how well they work, and—ahem—that’s the reason nobody drives flying cars.
Let’s glance back at a little history, always the best response to this kind of futuristic cluelessness. The first actual flying car anyone seems to have built was the Curtiss Autoplane, which was designed and built by aviation pioneer Glen Curtiss and debuted at the Pan-American Aeronautical Exposition in 1917. It was cutting-edge technology for the time, with plastic windows and a cabin heater. It never went into production, since the resources it would have used got commandeered when the US entered the First World War a few months later, and by the time the war was over Curtiss apparently had second thoughts about his invention and put his considerable talents to other uses.
There were plenty of other inventors ready to step into the gap, though, and a steady stream of flying cars took to the roads and the skies in the years thereafter. The following are just a few of the examples. The Waterman Arrowbile on the left, invented by the delightfully named Waldo Waterman, took wing in 1937; it was a converted Studebaker car—a powerhouse back in the days when a 100-hp engine was a big deal. Five of them were built.
During the postwar technology boom in the US, Consolidated Vultee, one of the big aerospace firms of that time, built and tested the ConVairCar model 118 on the right in 1947, with an eye to the upper end of the consumer market; the inventor was Theodore Hall. There was only one experimental model built, and it flew precisely once.
The Aero-Car on the left had its first test flights in 1966. Designed by inventor Moulton Taylor, it was the most successful of the flying cars, and is apparently the only one of the older models that still exists in flyable condition. It was designed so that the wings and tail could be detached by one not particularly muscular person, and turned into a trailer that could be hauled behind the body for on-road use. Six were built.
Most recently, the Terrafugia on the right managed a test flight all of eight minutes long in 2009; the firm is still trying to make their creation meet FAA regulations, but the latest press releases insist stoutly that deliveries will begin in two years. If you’re interested, you can order one now for a mere US$196,000.00, cash up front, for delivery at some as yet undetermined point in the future.
Any automotive engineer can tell you that there are certain things that make for good car design. Any aeronautical engineer can tell you that there are certain things that make for good aircraft design. It so happens that by and large, as a result of those pesky little annoyances called the laws of physics, the things that make a good car make a bad plane, and vice versa. To cite only one of many examples, a car engine needs torque to handle hills and provide traction at slow speeds, an airplane engine needs high speed to maximize propeller efficiency, and torque and speed are opposites: you can design your engine to have a lot of one and a little of the other or vice versa, or you can end up in the middle with inadequate torque for your wheels and inadequate speed for your propeller. There are dozens of such tradeoffs, and a flying car inevitably ends up stuck in the unsatisfactory middle.
Thus what you get with a flying car is a lousy car that’s also a lousy airplane, for a price so high that you could use the same money to buy a good car, a good airplane, and a really nice sailboat or two into the bargain. That’s why we don’t have flying cars. It’s not that nobody’s built one; it’s that people have been building them for more than a century and learning, or rather not learning, the obvious lesson taught by them. What’s more, as the meme above hints, the problems with flying cars won’t be fixed by one more round of technological advancement, or a hundred more rounds, because those problems are hardwired into the physical realities with which flying cars have to contend. One of the great unlearned lessons of our time is that a bad idea doesn’t become a good idea just because someone comes up with some new bit of technology to enable it.
When people insist that we’ll have flying cars sometime very soon, in other words, they’re more than a century behind the times. We’ve had flying cars since 1917. The reason that everybody isn’t zooming around on flying cars today isn’t that they don’t exist. The reason that everybody isn’t zooming around on flying cars today is that flying cars are a really dumb idea, for the same reason that it’s a really dumb idea to try to run a marathon and have hot sex at the same time.
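The torque-versus-speed tradeoff the quoted post describes is just P = τ·ω: at fixed power, available torque falls in proportion to shaft speed. A quick illustration in Python (the 100 kW figure and both shaft speeds are made-up example numbers):

```python
import math

def torque_nm(power_w: float, rpm: float) -> float:
    """Torque available at a given shaft speed for a fixed power: tau = P / omega."""
    omega = rpm * 2 * math.pi / 60  # rpm -> rad/s
    return power_w / omega

POWER = 100_000  # a hypothetical 100 kW engine

# A car wants lots of torque at low shaft speed (hills, traction from rest);
# a propeller wants the same power delivered at high shaft speed instead.
wheel_side = torque_nm(POWER, 500)   # ~1,910 N*m at a wheel-friendly 500 rpm
prop_side = torque_nm(POWER, 2500)   # ~382 N*m at a propeller-friendly 2,500 rpm

# Same engine, 5x less torque at the speed the propeller wants:
ratio = wheel_side / prop_side  # 5.0
```

A gearbox can of course trade one for the other, but the quoted point stands: a single powertrain sized for one regime is compromised in the other, and every extra gearbox is weight a flying machine can ill afford.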
Current-resolution lidar, cameras, and radar seem to provide sufficient sensor input. The costs are too high by a long shot, but that may just be a question of getting economies of scale established. Current PC graphics hardware has sufficient bandwidth to process those sensors. I don't think you can just throw current neural net training at the problem and get Level 5 autonomy out of it - there will be lots and lots of engineering hours in figuring out what to do with that sensor data - but that's just a problem of doing many man-years of straightforward work.
Flying cars don't have adequate power from a current-gen internal combustion engine running on petroleum, and especially not enough power from lithium-ion batteries and electric motors. If you could get a power source that provided an order of magnitude or two greater power density than the best of those technologies, flying cars would be viable. Until then, no amount of engineering hours will make it work.
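The power-density claim can be sanity-checked with ideal momentum (actuator-disk) theory, where hover power is P = sqrt(T³ / (2ρA)) for thrust T, air density ρ, and rotor disk area A. All vehicle numbers below are illustrative assumptions, not any real design:

```python
import math

g = 9.81     # m/s^2
rho = 1.225  # kg/m^3, sea-level air density

# Hypothetical 1,000 kg vehicle with 10 m^2 of total rotor disk area.
mass_kg = 1000.0
disk_area_m2 = 10.0

thrust_n = mass_kg * g
# Ideal (induced) hover power from momentum theory -- a real rotor needs more.
hover_power_w = math.sqrt(thrust_n**3 / (2 * rho * disk_area_m2))  # ~196 kW

# Energy for a 30-minute hover-equivalent flight vs. Li-ion pack density.
flight_energy_wh = hover_power_w * 0.5
pack_density_wh_per_kg = 200.0  # optimistic pack-level figure
battery_kg = flight_energy_wh / pack_density_wh_per_kg  # ~490 kg of battery
```

Under these assumptions roughly half the vehicle mass would be battery for a half hour of ideal-case hover, before rotor losses, reserves, or payload, which is the order-of-magnitude gap the parent comment is pointing at.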
So essentially thousands of flying nuclear reactors piloted by average joes around the city.
That sounds really safe!
You can get around that by using an electric transmission: a turbine drives an alternator, which drives two sets of electric motors, one for the wheels and one for the propellers. As to the rest of the post, it’s attacking a straw man. I don’t think people want a highway-capable car that can also fly. If you can fly, why drive on the highway?
A $50k-to-$100k VTOL ‘flying car’ with a maximum cruise speed of 80 mph, a maximum altitude of 10,000 feet, a range of 500 miles, room for 2+ people, and a cargo capacity of 1,000 lb including people fits most people’s definition of a flying car. Being able to move around on the ground at, say, 15 to 25 mph without giant spinning blades would also be a great feature.
Oddly enough, I think we already have something close to flying motorcycles in autogyros, but the closest thing to a flying car is a vanilla small airplane, and those run you $250k new.
PS: There is even something of a jet pack alternative https://www.youtube.com/watch?v=bpwd-T2Qvbk
"By 2000 or so that curve had flattened out in the usual way as PV cells became a mature technology, and almost two decades of further experience with them has sorted out what they can do and what they can’t."
In fact, PV prices have dropped DRAMATICALLY since 2000 (https://www.sciencedirect.com/science/article/abs/pii/S03014... looks at the different trends 1986-2001 and 2001-2012), as have the prices/performance of the energy storage systems needed to make them practical.
I agree that it's not a silver bullet that will solve the fossil fuel crisis all on its own (at least not in time), but it is in line with the broader improvement in renewable costs and efficiency.
Can't open this, but the abstract also shares this:
"Market-stimulating policies were responsible for a large share of PV's cost decline"
This part is artificial though (subsidies, etc).
No idea if it's even theoretically possible, but would be neat.
Blade shape can be tuned to an extent for minimizing noise but that also reduces efficiency (not much to spare). Larger, slower turning blades are also quieter but there are practical physical limits on size and weight.
If we care about noise, we should be addressing the motorcycle industry. I can’t hear a C-130 flying overhead at 1000 feet when I am in my house, but I can hear motorcycles zipping by on the freeway behind my house.
This is maybe one of the most important lessons of the 20th and 21st centuries (at least thus far): knowledge does not automatically prevent us from errors in judgments nor does it necessarily protect us from misfortune.
I was in this very situation at the top of Pikes Peak. A storm moved in and the park rangers closed the place and sent everybody away. They knew the statistics of being hit by lightning in that very place.
I also don’t get how everyone is forgiving him for being on his phone, in a construction zone no less. Reckless driving is reckless driving, being an Apple Engineer and Tesla owner doesn’t somehow negate that he was being a belligerent driver.
And, Autopilot being a technology that, among other things, enables (or even encourages) idiotic behavior, there's real risk that placing too much blame on the driver's choices lulls us into an attitude that enables the next idiot to kill themselves and/or someone else.
This guy was on his phone in a construction zone and crashed because of his lack of intervention, using a driving assist feature doesn’t somehow absolve him from being so preoccupied that his vehicle veered off the road and into a wall. Imagine if instead of him dying he had killed a construction worker; I have no doubt a jury would find him guilty of manslaughter. When you get into the driver’s seat of a car you are taking on responsibility for a death machine. I find it troubling that this conversation is happening at all, the blame should be put squarely on his shoulders.
If the car had suddenly slammed the wheel to the side, causing him to lose control, or had become unresponsive to his inputs, that would be another matter. But this could have been prevented if he hadn't been grossly negligent of the risk he was taking on behind the wheel.
(Unless you're plaintiff's counsel, of course.)
Only when comparing aggregate statistics. Not all humans are equally (un)safe drivers; insurance rates vary based on driving record for good reason.
I have a Tesla and there are definitely problem areas. You learn them fairly quickly when you are taking the same route all the time and you’re trained to either turn off autopilot or at least be alert when going through those areas. Or maybe you test it out with your hands on the wheel ready to take over to see if they fixed the bug.
There were a couple spots on my normal driving routes where the car would inexplicably swerve. It happened one time in each spot and that was enough. Both those spots have been fixed since, but there’s no way I’d be on my phone not paying attention driving through there. I’m still cautious. There are two more spots where the car will brake to 45 mph on the highway and then speed back up after a few hundred feet. I am always on high alert around there and usually won’t even use autopilot in those areas.
It's a well known phenomenon that the more you automate away routine tasks, the harder it is for the driver to stay alert and take over in non-routine situations. My understanding is that airplane pilots are specifically trained in strategies to avoid falling into that trap.
Isn't holding the wheel always required now in Tesla's autopilot?
> Records from an iPhone recovered from the crash site showed that Huang may have been using it before the accident. Records obtained from AT&T showed that data had been used while the vehicle was in motion, but the source of the transmissions couldn't be determined, the NTSB wrote. One transmission was less than one minute before the crash.
For all we know, that could mean he had spotify on.
Anyhow, "he was worried about it" is no reason to shift the blame to him.
> Recovered phone logs show that a strategy game called “Three Kingdoms” was “active during the driver’s trip to work,” the NTSB said. Investigators said the log data “does not provide enough information to ascertain” whether Huang “was holding the phone or how interactive he was,” though it said “most players have both hands on the phone to support the device and manipulate game actions.” Huang’s data usage was “consistent” with online game activity “about the time of the crash,” according to the NTSB.
Or does Tesla allow you to set the speed?
(Definitely not singling out Apple here.. at IBM I had a coworker who was in an accident during a phone meeting- luckily non-fatal).
Frankly the whole point of automated control is to reduce this kind of mistake fatality, and... I mean, it's working. This was a tragedy for sure, but it was also fully two years ago. These events haven't been recurring, it's likely the specific proximate causes have been addressed, and by all reckoning these systems (while still not flawless!) are at or above parity with alert human drivers in terms of safety.
Basically, I read the same facts you do and take the opposite conclusion. People make bad risk/reward decisions all the time, so we need to take them out of the loop to the extent practical.
Granted, we're not quite ready for self driving, but there's no question that the neural network subfield of ML has absolutely exploded in the last 5-10 years and is bursting with productionizable, application ready architectures that are already solving real world problems.
There is no doubt in my mind that an AI with billions of parameters will be excellent at memorizing stuff.
I also have no doubt that research activity has exploded, which might be related to the generous hardware grants being handed out...
But all that research has produced surprisingly little progress over algorithms from 2004 in the field of optical flow.
The papers I looked at were the top-ranked optical flow algorithms on Sintel and KITTI. So those were the AIs that work best in practice, better than 99% of the other AI solutions.
If it's as bad as you say, it seems like a critical evaluation would be pretty interesting and advance the field.
Source: Used to work for an 80s-era "AI Company"
We're way past memorization. We're into interpolation and extrapolation in high D spaces with Bayesian parameters. Sentiment analysis and contextual searches - search by idea, not keyword. Heuristic decision making. Massively accelerated 3D modeling with 99% accuracy. Generative nets for text, music, scientific models...
Sorry, but you're behind the times, and that's ok - one of us will be proven right in the next 1-5 years. Based solely on the work we're doing at the startup I'm working for, we're on the cusp of an ML revolution. Time will tell, but personally I'm pretty excited. And don't worry, I'm not working in adtech or anything useless.
That said, the driving problem seems to be quite far from being solved, I agree though it is outside my expertise; but I think the primary issue is that this is an application where error must be unrealistically low, a constraint which does not apply to many other domains. You can get away with a couple percent of uncertainty when people's lives aren't on the line!
And yet it's literally in cars on the road.
I'm not saying you're wrong because of that. I just wonder how far from "ready" we are, and how much of a gamble manufacturers are taking, and how much risk that presents for not just their customers, but everyone else their customers may drive near.
It is not. There is no real self-driving on the road, at least not in conventional vehicles.
Tesla's Autopilot is basically a collection of conventional assistive systems that work under specific circumstances. Granted, the circumstances where it works are much broader than the ones defined by the competition, but for a practical use case it's still very restricted.
Self-driving systems can be affected by very minor changes in lighting, weather, and other conditions. While Tesla's stats on x million miles driven under Autopilot are impressive, they do not show the real capabilities of the self-driving system. For example, you can only enable Autopilot under specific circumstances, such as while driving on an Autobahn in clear weather. In conditions with limited sight, for instance, Autopilot won't turn on, or will hand over to the driver, simply because it would fail. Of course, this is for passenger safety, but these are situations real self-driving vehicles need to handle.
Other leading projects like Waymo also test the vehicles under ideal circumstances with clear weather etc.
We'll most likely see fully self-driving vehicles in the future, but this future is probably not as close as Tesla PR makes us think.
Emphasis on real. There is definitely something that most people would refer to as "self driving" in cars on the road.
I'm not saying what is there is specifically good at what it does - I'm saying someone put it into use regardless of how fit for purpose it is.
> but this future is probably not as close as Tesla PR makes us think
Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".
Then you'd have to define what a self-driving car actually means. At least for me, self-driving means level 4 upwards. Everything below I'd consider assisted driving.
> Unless you're suggesting the PR team decided to make shit like "Summon" available to the public, then it's not just "PR spin".
As I said, this Smart Summon feature also only works under very specific circumstances with multiple restrictions (and from what I've seen on Twitter it received mixed feedback).
Just because the car manages to navigate a parking lot at 5 km/h relatively reliably, that doesn't mean that at the end of the year it'll be able to drive the car at 150 km/h on the Autobahn.
I said "for most people". For most people I know, a car that will change lanes, navigate freeways and even exit freeways is "self driving". It may be completely unreliable but even a toaster that always burns the bread is called a toaster: no one says "you need to define what a toaster means to you".
> At least for me, self-driving means level 4 upwards.
I have literally zero clue what the first three "levels" are or what "level 4" means, and I'd wager 99% of people who buy cars wouldn't either.
> that doesn't mean that at the end of the year it'll be able to drive the car with 150 km/h on the Autobahn.
30 seconds found me a video on youtube of some British guy using autopilot at 150kph on a German autobahn last June.
Again: I'm not suggesting that it is a reliable "self driving car". I'm suggesting that it is sold and perceived as a car that can drive itself, in spite of a laundry list of caveats and restrictions.
This argument is leaning toward the ridiculous.
I think only you and Elon Musk consider a "greater than zero chance of making it to your destination without intervention" to be self-driving.
>Autopilot enables your car to steer, accelerate and brake automatically within its lane.
>Current Autopilot features require active driver supervision and do not make the vehicle autonomous.
We're not at the self-driving level of kicking back the seat and watching Netflix on your phone yet.
I doubt we will ever get there; there will always be edge cases that are difficult for a computer to grasp: faded lane markings, some non-self-driving car doing something totally unexpected, extreme weather conditions limiting visibility for the cameras, etc.
-This is the scariest bit, IMHO. Basically, autopilot is well enough developed to mostly work under normal conditions; humans aren't very good at staying alert for extended periods of time just monitoring something which mostly minds its own business.
Result being that the 'assist' runs the show until it suddenly veers off the road or into a concrete barrier, bicyclist, whatever. 'Driver' then blames autopilot; autopilot supplier blames driver, stating autopilot is just an aid, not a proper autopilot.
This is the worst of both worlds. Driver aids should either be just that - aids, in that they ease the cognitive burden, but still require you to pay attention and intervene at all times - or you shouldn't be a driver anymore, but a passenger. Today's 'It mostly works, except occasionally when it doesn't' is terrifying.
A model where a driver is assumed to disengage attention, but is then expected to re-engage in a fraction of a second to respond to an anomalous event, is fundamentally flawed at its core, I think. It's like asking a human to drive and not drive at the same time. Most driving laws assume a driver should be alert and at the wheel; what does this assume...? That you're not alert and at the wheel?
As you're pointing out, this leads to a convenient out legally for the manufacturer, who can just say "you weren't using it correctly."
I fail to see the point of autopilot at all if you're supposed to be able to correct it at any instant in real-world driving conditions.
-The cynic in me suggests we need autopilot as a testbed on the way to the holy grail of Level 5 autonomous vehicles.
The engineer in me fears that problem may be a tad too difficult to solve given existing infrastructure - that is, we'd probably need to retrofit all sorts of sensors and beacons and whatnot to roads in order to help the vehicles travelling on it.
Also, highway lane splits are very dangerous in general. It's a concrete spear with 70mph cars whizzing right towards it. Around here, they just use barrels of material, sand I believe. Somebody crashes into one, they clean the wreck, and lug out some more sand barrels. Easy and quick.
For the foreseeable future, there's simply too many variables outside autopilot manufacturers' control; I cannot see how car-borne sensors alone will be able to provide the level of confidence needed to do L5 safely.
Oh, and a mix of self-driving and bipedal, carbon-based-driven ones on the roads does not do anything to make it simpler, as those bipedal, carbon-based drivers tend to do unpredictable things every now and then. It'll probably be easier when (if) all cars are L5.
So - humans need to adapt to new behaviour from other vehicles on the road.
When ALL vehicles are L5, though, they (hopefully) will all obey the same rules and be able to communicate intent and negotiate who goes where when /prior/ to occupying the same space at the same time...
And, of course, they should all obey the same rules (well, traffic regulations being one, but also how they handle the unexpected - it would be a tough sell for a manufacturer who would rather damage the vehicle than other objects in the vicinity in the event of a pending collision, if other manufacturers didn't follow suit...)
Autonomous Mad Max-style vehicles probably isn't a good thing. :/
Yes, it's a hard problem, yes we are not nearly there and there is a lot of development/research to do. Yes, accidents will happen during the process. But humans suck at driving and kill themselves and other people daily. It's the least safe form of transportation we have.
Based on Tesla's safety report, 'more than 1 billion' miles have been driven using autopilot. Given the small data sample and the fatalities already attributed to autopilot, I think we're some way from proving it's safer than letting drivers drive alone, never mind close to being a driver substitute.
>> After accounting for freeways (18%) and intersections and junctions (20%), we’re still left with more than 60% of drivers killed in automotive accidents left unaccounted for.
>> It turns out that drivers killed on rural roads with 2 lanes (i.e., one lane in each direction divided by a double yellow line) accounts for a staggering 38% of total mortality. This number would actually be higher, except to keep the three categories we have mutually exclusive, we backed out any intersection-related driver deaths on these roads and any killed on 2-lane rural roads that were classified as “freeway.”
>> In drivers killed on 2-lane rural roads, 50% involved a driver not wearing a seat belt. Close to 40% have alcohol in their system and nearly 90% of these drivers were over the legal limit of 0.08 g/dL.
I don't think people give enough attention to whether broad statistics actually apply to cases of interest. That's about 40% of all driver fatalities occurring on rural non-freeway roads, of which 35% (~14% overall) were legally driving drunk.
People compare various fatality rates associated with riding an airplane vs driving a car all the time, but I've never seen anyone point out that an incredibly simple mitigation you're probably already doing -- not driving on non-freeway rural roads -- lowers your risk of dying in a car accident by more than a third. And it gets even better if you're not driving drunk!
If you measure driving quality in terms of fatality rate, it is actually the case that almost everyone is better than average. A lot better than average. But public discussion completely misses this, because we prefer to aggregate unlike with unlike.
If half of all driving occurs on highways and half doesn’t, and half of all accidents are on highways, then avoiding highways will have absolutely no effect on your accident rate.
It’s possible that driving on these roads leads to a disproportionate accident rate, but you haven’t actually said that.
You're right in spirit. I actually addressed this in passing in the comment "an incredibly simple mitigation you're probably already doing". Rural roads carry less traffic than non-rural roads for the very obvious reason that most people don't live in rural areas. The disparity is documented: https://www.ncsl.org/research/transportation/traffic-safety-...
We can also note that freeway vehicle-miles (excluded from this rural roads statistic) are going to be an inflated share of driven miles precisely because the purpose of the freeway is to cover long distances.
But as to the specific number I provided ("more than a third"), you're on target in accusing me of a fallacy.
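The correction being conceded here is that raw fatality shares only translate into personal risk once you divide by exposure (vehicle-miles traveled). A toy calculation with entirely invented numbers shows why the normalization matters:

```python
# All figures are invented for illustration; only the arithmetic is the point.
deaths = {"rural_2lane": 38.0, "freeway": 18.0, "other": 44.0}  # % of fatalities
vmt = {"rural_2lane": 15.0, "freeway": 35.0, "other": 50.0}     # % of miles driven

# Fatality *share* alone says rural roads are ~2x worse than freeways (38 vs 18).
# Normalizing by exposure gives a rate: deaths per unit of driving done there.
rate = {road: deaths[road] / vmt[road] for road in deaths}

# rate["rural_2lane"] ~ 2.53 vs rate["freeway"] ~ 0.51: under these assumed
# exposures, a rural mile is about five times riskier than a freeway mile.
relative_risk = rate["rural_2lane"] / rate["freeway"]
```

Whether avoiding rural two-lane roads really cuts your personal risk "by more than a third" therefore depends on the actual VMT split, which is exactly the gap the parent comment concedes.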
The numbers show that Teslas experience a lower crash rate than other vehicles. Granted, this can be to a number of reasons including the hypothesis that humans deciding to buy Teslas drive more carefully to begin with. And the numbers show that turning on autopilot further reduces crash rates.
This at least tells us that letting the vehicles with the automated driving and safety features on the road doesn't increase the risk for the driver and others, which was the original premise I responded to.
- The mechanical state of the car (Teslas with autopilot tend to be new/newish vehicles, and thus in excellent mechanical shape)
- The age and competence of the driver - I'm guessing people who make enough to buy a Tesla are usually not senile 80-year-olds or irresponsible 18-year-olds
- Other security gizmos in Teslas that cheaper cars may lack
Overall, it would be more fair to compare against cars of similar age and at similar price point.
It kinda seems self evident that a car that drives you into a wall randomly is less safe than one that doesn't.
I grant that Teslas might be safer than, e.g., a drunk driver, and so we might be better off replacing all cars with Teslas in some sense, but we'd also be better off if we replaced drunk drivers with sober ones. But would a Tesla be safer than a safe, competent driver, and would mandating one be ethical? At that point, are you penalizing safe, competent drivers?
Drunk drivers in Teslas are actually interesting for me to think about, because I suspect they'd inappropriately disengage autopilot at nontrivial rates. I'm not sure what that says but it seems significant to me in thinking about issues. To me it maybe suggests autopilot should be used as a feature of last resort, like "turn this on if you're unable to drive safely and comply with driving laws." But then shouldn't you just not be behind the wheel, and find someone who can?
Unless you're serious about bringing the bar way up for getting a driver's license, I think it's fair to compare self-driving technology with real humans, including the unsafe and incompetent. In most of the world, even those caught red-handed driving drunk are eventually allowed to drive again.
Have they released all the data to be analyzed by independent people?
Also autopilot only runs in the best of conditions. Are they comparing apples to apples?
You mean the company that has staked its future on selling this technology claims the technology is better than any alternative?
This is aside from the fact that the NHTSA says the claim of "safest ever" is nonsense and that there is zero data in that PR blog post.
A fun example: someone was selling meat and claimed it was 50% rabbit and 50% horse, because he used 1 rabbit and 1 horse. The takeaway is that when you read statistics, you want to find the actual data and check whether the statistics are being used correctly; most of the time, as in this case, the people presenting them are manipulating you.
There was an article about a city in Norway with 0 deaths in 2019. If I limited my statistics to that city and that year only, I would get the number of 0 people killed by human drivers.
I disagree. I have seen such an awful number of bugs in the ML code accompanying papers that I now take for granted that there is a bug somewhere and hope it doesn't impact the concept being evaluated.
(here having everyone use Python, a language that will try its best to run whatever you throw at it, feels like a very bad idea)
If you know any good paper that tries a novel approach and doesn't just recycle the old SSIM+pyramid loss, please post the title or DOI :)
If that were the case, self-driving cars wouldn't be on the road. I don't think we should aim for perfection; perfection will come. We should be looking for cars that make fewer errors on average than humans. Once you have that, you can start putting cars on the road and use data from the fleet to correct the remaining errors.
There's just something dubious about how you seem to consistently find mistakes and problems in these papers. I'd be stunned if any expert weren't aware of the shortcomings of using a kernel as small as 3x3.
Another thing: just the time needed to evaluate technology the way they describe sounds quite staggering.
I don't find this to be the case with most ML researchers. Is it possible you have misunderstood some of these papers? It is, after all, hard to jump straight into a new field.
> The second paper converted float to bool and then tried to use the gradient for training.
This sounds like binarized neural networks. If that's the case, they keep the activation before binarization to use for backpropagation.
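If it is a binarized network, the usual trick is the straight-through estimator: binarize on the forward pass, but backpropagate as if the binarization were the (clipped) identity. A minimal numpy sketch of that idea, not taken from any particular paper:

```python
import numpy as np

def binarize_forward(x):
    # Forward: hard sign, -1 or +1. As a step function its true
    # gradient is zero almost everywhere.
    return np.where(x >= 0, 1.0, -1.0)

def binarize_backward(x, grad_out):
    # Straight-through estimator: pretend the forward pass was the
    # identity, but zero the gradient where |x| > 1 (saturation).
    return grad_out * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.5, 0.3, 1.7])
y = binarize_forward(x)                    # [-1., -1.,  1.,  1.]
g = binarize_backward(x, np.ones_like(x))  # [ 0.,  1.,  1.,  0.]
```

So the training signal flows through the pre-binarization float activations, which is why the float-to-bool conversion is not automatically "mathematically wrong".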
> The third paper only used a 3x3 pixel neighborhood for learning long-distance moves.
A single layer of 3x3 convolutions would not be able to model long-distance moves. But I have not read a single paper where they have only used one layer. Is it possible they stacked multiple conv + pooling layers? The receptive field of each unit higher up in the stack grows pretty large in the end.
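The receptive-field growth can be computed directly with the standard recurrence: each layer adds (kernel - 1) times the accumulated stride ("jump") to the receptive field. A quick sketch:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, bottom to top.
    Returns the receptive field (in input pixels) of one unit
    in the top layer."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump  # widen by the kernel's reach
        jump *= stride             # strides compound multiplicatively
    return rf

# Five plain 3x3 stride-1 convs: the RF grows only linearly (3,5,7,9,11).
assert receptive_field([(3, 1)] * 5) == 11

# Interleave 2x2 stride-2 pooling and the same five 3x3 convs
# see a 94-pixel-wide window.
print(receptive_field([(3, 1), (2, 2)] * 5))  # 94
```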
For instance, I was driving on autopilot on a section of 101 where they repainted the lanes. I let autopilot do its thing...but I closely observed and kept my hand on the wheel and foot on the brakes. Lo and behold...the car positioned itself right in the shoulder and was driving towards no-man's land. If someone wasn't paying attention in this situation, it would have been catastrophic.
I can't think of any time in my life where I've almost driven my car into a no-man's land even after thousands of hours of driving, and yet every Tesla owner I know has half a dozen stories about the time Auto Pilot tried to steer them into a barrier or took a corner way too fast or suddenly braked for no reason.
I totally get the appeal of a Tesla, holy shit that acceleration is amazing, but Auto Pilot just does not seem worth it at all.
Whereas once you feel that all agency is stripped from you, it's all someone else's fault, especially if the mistakes feel alien (as in "no human would err in this specific manner").
I try new technologies all the time to get a sense of where it is in coming to fruition. I think what you mean is, you’re more skeptical of claims.
The same way you claim you can’t learn anything about NY in your bathroom, you don’t know anything about Tesla or Self-driving if you haven’t tried it. You should at least test drive it under controlled conditions where you feel comfortable before closing yourself off completely.
Who cares what the car does under controlled conditions? I'm sure the manufacturer did exactly that in their testing. Even when they test on public roads, there's a hands-off safety driver behind the wheel, who is paid to be on the lookout and sufficiently alert to take over in case of an unexpected excursion. (Unless the self-driving car under test is from Uber, in which case the safety driver simply watches video on their phone. Too soon?)
This is nowhere near how these cars are used in the real world. The real world is not a set of "controlled conditions", so any comfort one builds up in such a situation is merely a false sense of security.
> [...] where you feel comfortable before closing yourself off completely.
So, here's the thing: I'm comfortable driving myself. I don't get distracted, I use good judgment, I consistently prioritize the safety of my vehicle's occupants over everything else. I know exactly how flawed self-driving cars are, and how far behind the curve of my driving skill they will remain within my lifespan. That's the sum total of everything I need to know, and no amount of "controlled conditions" demos will change my mind.
P.S.: If you're from the future and you're reading this because I got mowed down by a self-driving car: ha ha! Joke's on me.
Maybe not today, not tomorrow, but maybe six months in the future when the weather and road conditions happen to be just precisely right to confuse the system at just the most dangerous time.
In the meantime I will just read/watch the stories of people more trusting than me about how well the technology works, and currently those stories don't fill me with confidence.
IMO, this is currently dangerous technology that should not be allowed on the road at all.
Common-sense tells me that these half-self-driving systems are dangerous.
I would like to see a study that tested the reaction times of a person who sits doing nothing for an hour and then is suddenly expected to take evasive maneuvers, versus a sober - or even a drunk - driver who is actually driving the car continuously.
Then have an opinion, otherwise it’s like reading about NYC and saying you hate it because you read the reviews.
Of course I can have an opinion without going for a ride in one, and that opinion is that I don't trust it and I won't "experience it".
It's far more janky and susceptible to confusion than Tesla makes it out to be in its marketing, and the reality is that people simply do not pay as much attention as they are required to when using it because Tesla has convinced them it's magic that's safer than anything else on the road.
I have a Tesla. I usually use Autopilot for highway traffic only, summon it like a valet to where I am in my parking lot, and avoid having to idle in hot and cold weather.
I agree, I wouldn’t use it for local roads and unclear highways, but isn’t this what they tell you? I don’t think they ever tell you that it’s full self driving right now. Also, I’ve experienced it being janky but over time it’s improved dramatically.
As a car and efficiency enthusiast, I totally try to keep my gasoline powered car from idling unnecessarily.
But what does "idling" mean in the context of a 100% electric car?
It can drive by itself from where it's parked to where you are?
From your description, I was thinking more of something like waiting at the entrance to a public carpark and the car comes to you.
Also, I'm quick to reach for a simple pneumatic cylinder to solve a problem. Perfectly capable of using new electric-servo-ballscrew-hotness to do a similar move, but the value provided by tried-and-tested systems is hard to overstate.
Probably the production versions of those models are suboptimal in different ways but work better in practice...
Can't find it now but there was a poignant quote or anecdote I read the other day that expresses this exact sentiment - the more you know about technology, the less likely you are to use it. I think it was in the context of e.g. smart homes and voice assistants or online tracking - if you're aware of how much data they hoover up and what can be done with that, you'd be Very Afraid.
Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via alexa! I love the future!
Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.
https://biggaybunny.tumblr.com/post/166787080920/tech-enthus... (via foxrob92)
I would never call myself an engineer, although I have worked in environments where there were lots of engineers (not in the software world) and one definition of "engineering" that I heard was:
"Meeting the requirements while doing as little new as possible"
Let's say that you can go to work by feet or by bike.
Let's also say that on foot your probability of getting killed is 1 in 1,000,000, and the commute takes 20 minutes.
On a bike, commuting takes only 10 minutes, but the probability rises to 1 in 200,000. Five times higher.
You end up deciding to use the bike every day to go to work.
In this example, you decided to trade a slightly higher probability of ending up dead for comfort (a faster commute).
Imagine now you need to decide whether to commute in your Tesla with or without autopilot.
Let's assume (I might be wrong) that Tesla's autopilot increases your chances of getting killed. (for simplicity, let's ignore the consequences for other people on the road).
Would you still trade a slightly higher probability of dying for comfort (not needing to drive)?
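The trade-off above can be put in numbers. Using the toy figures from the walking/cycling example (and a made-up 230 working days per year), the annualized risk looks like:

```python
# Toy numbers from the commute example above; 230 working days assumed.
commutes_per_year = 2 * 230            # two trips per working day

walk_risk = 1 / 1_000_000              # per-commute death probability
bike_risk = 1 / 200_000

# Annual probability of at least one fatal accident,
# assuming independent commutes:
walk_annual = 1 - (1 - walk_risk) ** commutes_per_year
bike_annual = 1 - (1 - bike_risk) ** commutes_per_year

hours_saved = commutes_per_year * 10 / 60   # 10 minutes saved per trip

print(f"walking: {walk_annual:.4%} per year")
print(f"cycling: {bike_annual:.4%} per year, saving {hours_saved:.0f} h")
```

The bike's annual risk is still roughly five times the walker's; the question is only whether ~77 saved hours a year is worth that delta, which is exactly the shape of the Autopilot question.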
In other words, as a buyer, I do care about the occupant safety rating; you are correct in that. As a road user, I also care that other people's cars treat me as irrelevant, my potential death an externality to be amortized into the purchase price.
In my eyes self-driving should still be called driving-assistance.
So far there isn't any evidence for this assumption.
When a self-driving car is involved in an accident, it's in the news all over the world.
Human drivers kill or get killed every day, in every country.
Tesla was criticized quite a bit at one point for comparing deaths per Autopilot mile to deaths per all motor vehicle miles. This was a bad comparison because motor vehicles included motorcycles, as well as older, poorly-maintained cars, etc.
Then Tesla released a comparison between Autopilot miles in Teslas and human-driven miles in Teslas where Autopilot was eligible to be engaged. This felt like a much more fair comparison, but Teslas are lenient about where Autopilot can be engaged - just because the car will allow it doesn't mean many people would choose to do so in that location, so there might be some bias towards "easier" locations where Autopilot is actually engaged. There's also the potential issue of Autopilot disengaging, and then being in an accident shortly afterwards.
This is morbid, but I also wonder about the number of suicides by car that are included in the overall auto fatality statistics. If someone has decided to end their life, a car might be the most convenient way (and it might appear accidental after the fact). That would drive up the deaths-per-mile stat for human drivers, but makes it tougher for me to decide which is safer - Autopilot driving or me driving?
Unless you're talking about getting rid of the steering wheel and deploying the current system as Level 5. In that case, yes, interventions should count against it.
So, the hard failure for humans is pretty bad, too, just different. I suspect there’s little overlap on a Venn diagram of the hard failure modes for AI and humans.
Citation needed, as far as I know this is not true at all.
"In the 4th quarter, we registered one accident for every 3.07 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.10 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.64 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles."
Yes, people disengage auto-pilot (or it does so itself), but it is at least plausible to say that in this like-for-like comparison of drivers and vehicles self-driving is at least comparable in safety.
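Taking the quoted figures at face value (and ignoring the selection effects discussed elsewhere in this thread), the per-mile rates work out to roughly:

```python
# Crash rates per million miles, from the figures quoted above.
autopilot = 1 / 3.07        # Autopilot engaged
active_safety = 1 / 2.10    # no Autopilot, active safety features on
tesla_plain = 1 / 1.64      # neither
us_fleet = 1 / 0.479        # NHTSA figure for all US vehicles

print(us_fleet / autopilot)     # ~6.4x: Tesla's headline comparison
print(tesla_plain / autopilot)  # ~1.9x: the more like-for-like comparison
```

The like-for-like gap (same cars, Autopilot on vs. off) is much smaller than the headline 6x, which is the whole point about comparing against the general fleet.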
I don't know what the disengagement rate is for Tesla, but we do know that in early 2019 Waymo's was roughly one every 11k miles driven. Given that Waymo uses lidar and is generally considered closer to actual autonomy than Tesla, this speaks poorly of actual Autopilot safety.
 - https://9to5google.com/2019/02/13/waymo-self-driving-disenga...
I think every human would slow down if they saw something they couldn't explain. An AI will not.
It's basically the same problem as when an image recognition AI is 99% sure that the photo of your cat shows guacamole.
Current AIs do not have a concept of absolute confidence; they only produce an estimate relative to the other possibilities that were available during training. That's why fully novel situations produce completely random results.
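The relative-confidence point is baked into the softmax itself: the outputs always sum to 1 over the trained classes, so the network must put its probability mass somewhere, even for inputs unlike anything it saw during training. A minimal illustration:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # subtract max for stability
    return e / e.sum()

# Even pure-noise "logits" produce a perfectly well-formed probability
# vector over 10 classes; nothing in the output flags the input as junk.
rng = np.random.default_rng(0)
p = softmax(rng.normal(scale=5.0, size=10))

assert abs(p.sum() - 1.0) < 1e-9  # always sums to 1 by construction
print(p.max())  # the top class can look very confident on garbage input
```

There is no "none of the above" bucket unless you explicitly train one, which is why a cat can come out as 99% guacamole.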
Elaine Herzberg was in dark clothing crossing a dark street well away from any crosswalk or street lighting. Would a human driver have performed better? From the footage I saw she was nearly invisible, I would have hit her too.
This was not a hard fail for the AI.
(Or, more charitably, "oops, somebody forgot that object persistence is a thing" does not excuse the result)
It’s clear that a) the road is well lit and b) visibility is far, far better than the Uber video would suggest.
An ordinary human driver would have seen Elaine & taken evasive action. This death is absolutely Uber’s responsibility.
Looks like this was a hard fail for the AI then. We can say with better than 90% certainty that a human would have saved the situation, probably would have stopped or avoided easily. My mistake.
We can reasonably assume that pja is aware of the existence of abysmal drivers and fatal crashes that should not have happened. I doubt their intent was for "would" to be interpreted as "100%".
Just like LIDAR would have picked up https://www.extremetech.com/extreme/297901-ntsb-autopilot-de...
And just like LIDAR would have picked up https://youtu.be/-2ml6sjk_8c?t=17
And just like LIDAR would have picked up https://youtu.be/fKyUqZDYwrU
And just like LIDAR would have picked up https://www.bbc.co.uk/news/technology-50713716
These accidents are 100% due to the decision to use a janky vision system rather than spend $2000 on lidar, and to that janky vision system failing.
The car had LIDAR.
"the car detected the pedestrian as early as 6 seconds before the crash"
"Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words, to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.”" 
'Oh well it was very dark' is not a factor in the crash that killed Herzberg
There are many other cheaper LIDARs, even in the Velodyne lineup, but they are less capable.
The vehicle was equipped with both radar and lidar. The victim was detected as an unknown object six seconds prior to impact, and at 1.3 seconds prior to impact when the victim entered the road the system determined that emergency braking by the driver was required, however the (distracted) driver was not alerted of this.
Why would the system notify the driver that emergency braking was required instead of simply braking?
Just adding this in case you think we should somehow give Uber the benefit of the doubt here. They released footage from a pinhole dashcam sensor that is not used by the system, knowing full well it would look pitch black and send the ignorant masses into a "she came out of nowhere!" chant.
With the sensor array available to it the car should have done better, no question.
But to make the claim "fails harder" I would be looking for a clear cut situation where a human would almost definitely have outperformed the AI.
Human eyes do have miraculous dynamic range so we would likely see more. Can we say with 90% certainty that a human would have saved the situation?
Yes, given the misleading nature of the dashcam video I think we can. This was not a pitch dark road lit only by headlights where an obstacle "appeared out of no-where". This was a well-lit main street, with good visibility in all directions. An ordinary human driver would have had no problem identifying Elaine as a hazard and taking the appropriate avoiding action, which was simply to slow down sufficiently to allow her to cross the road.
The backup driver in the car was apparently looking at their phone or some other device and not watching the road at the time.
"According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review"
She was looking at a device, yes, but not her phone.
Uber put one person on a two person job, with predictable results.
The rest of my point still stands though.
Moreover, try this dashcam video:
It's 10 seconds long, and makes the pedestrian look almost invisible except for the soles.
However, when I took that video, both the crossing pedestrians were clearly visible, not vague shapes that you only notice when you're told they exist. So much for video feed fidelity.
Those failures include driving straight into barriers.
Other systems like OpenPilot show the same.
When it fails you better take control of the wheel or you will crash hard.
These are high-risk areas; if Autopilot is "failing hard" with a regularity equal to or higher than normal, then this would be good to demonstrate with stats. I'm guessing Tesla doesn't release that info?
Still seems like people treat auto-pilot like auto-drive and die as a result.
That's not a tech fail, imho.
Going through that experience has 1000% made me more wary of autonomous vehicles.
I’ve been similarly untrusting of a lot of “high tech” approaches to various things, and I derive a lot of joy from products/services/etc that take a “back to basics” or at least minimally- or non-digital philosophy. In particular: I have an affinity for automatic watches and carbureted motorcycles.
If nothing else, it’s a bit of a break from what feels like a constant struggle to keep all the gears turning at work.
BUT... I’ll confess I’m also a sucker for innovations / the occasional new hotness. I recently upgraded a Kawasaki KLR650 (a competent but... “well tested”, shall we say? motorcycle) in favor of KTM’s top-of-the-line adventure bike. The technology difference between the two (despite only 3 model years between them) is incredible: the latter adds 5x the power, ride-by-wire, cruise control, lean-sensitive ABS/traction control, an up/down-capable quickshifter, probably a thousand other improvements.
One day, about 1100 miles into owning the new bike, the dash pops up a low tire pressure warning from the tire pressure monitoring system. It showed the rear pressure was fairly low, and sure enough, I’d picked up a small screw between the treads.
Certainly a TPMS is nothing compared to anything self-driving, but honestly it was a bit of a wake-up call — I WANT systems on my bike to increase my safety level.
I’m not really sure what the lesson is here. Maybe “Look for the middle grounds (the ABS/TPMS-maturity systems) between ancient technology (anything on my beloved KLR) and bleeding edge (non-replicable papers on self-driving cars)”? Seems like this holds up ok, especially as a consumer of those techs... But maybe not for the innovators?
I'm in the same boat, but unfortunately you have to share the road with these things, so it's hard to completely avoid them. I do find myself avoiding Teslas on the freeway more and more.
However, safe self-driving is coming, slowly but surely (I work at a company which produces tech for these guys). The hyped companies are in trouble, but car OEMs, partnered with companies you've never heard of, are making slow, steady progress, all the while being subject to government functional safety requirements, particularly in the US and EU. There is zero chance that a Tesla or GM car will be allowed to fully self-drive, so no matter how advanced these systems are, they're sold as Level 2 systems requiring driver oversight and qualify as cruise control in regulations.
Today, we have full autonomy in some truck routes (only as proofs of concept), in ship yards, parking shuttles, mining equipment, quarry trucks, etc, places where the problem domain is more constrained. Generalized self driving is a ways off, but by the time you can make use of it, it will be safe, it just won't come from Tesla or Cruise or Uber.
But for other things, like the original iPhone, sometimes new tech is just better than what's out there, even if there will always be some flaws in the first versions.
Also, I was reviewing their public source code release and there was no approximation. That part of their loss function had simply never worked, but they had not noticed prior to the release of their paper.
And because they trained slightly differently from what they described in the paper, the AI worked competitively well nonetheless.
However, by saying "AI is stochastic gradient descent optimization" you're equating AI with Machine Learning/Deep Learning.
The list of AI technologies also includes things like Artificial Life, Genetic Algorithms, Biological Systems Modeling, and Semantic Reasoning.
I suspect that when we get true AI, i.e. Strong AI or Artificial General Intelligence, we will achieve it through a combination of these techniques made to work together.
There is a saying that software (security) engineers don't have IoT or any "smart" devices in their homes.
Why don’t we work on drones that pick us up and take us places to really leap ahead, get out of traffic, and do something amazing?
Cars driving themselves? How incremental.
ML is only one part of 'AI'
These requirements can be easily fulfilled with a well designed paper ballot system. I don't see any chance of doing the same with anything computerized.
But we don't apply this rule to the votes where bribery and coercion are most practical to start with, where there are a small number of voters that can be intensely targeted, and swinging a small number is sufficient to decide major outcomes.
But then I went into consulting and saw that big companies have teams of lawyers that settle proactively out of court to keep inconvenient truths out of the public opinion.
Like the first few exploding iPhones.
(Just an example, never worked there)
BTW, did you hear about the Uber crash where their AI couldn't track a pedestrian and then killed her?
That being said, I’m a huge skeptic of the current state of self driving cars. I would have assumed these systems use LIDAR as well as vision and could have at least slammed on the brakes.
A vehicle traveling 43 mph (69 km/h) can generally stop within 89 feet (27 m) once the brakes are applied.
The police explanation of Herzberg's path meant she had already crossed two lanes of traffic before she was struck by the autonomous vehicle.
I was also misled by the poorly exposed "official" video. Given the numbers above, there was time for a human driver to see her and even come to a complete stop. Further, since she was moving from one side of the road to the other and only entered directly into the vehicle's path in the last 1.3 seconds (image in the "Software issues" section of the Wikipedia article), it is likely that all that would have been needed to avoid the collision was a minor slowdown, and she would have completed her crossing safely.
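Assuming constant deceleration, the 89-foot figure lets us back out the implied braking force and sanity-check that timeline. A rough sketch, not an accident reconstruction:

```python
# Back out the deceleration implied by "43 mph stops in 89 feet"
# (constant deceleration assumed), then see what full braking starting
# 1.3 seconds before impact would have bought.
MPH_TO_MS = 0.44704
FT_TO_M = 0.3048

v = 43 * MPH_TO_MS        # ~19.2 m/s initial speed
d = 89 * FT_TO_M          # ~27.1 m stopping distance

a = v**2 / (2 * d)        # from v^2 = 2*a*d  ->  ~6.8 m/s^2
t_stop = v / a            # ~2.8 s to a full stop once braking starts

# Speed remaining if maximum braking had begun 1.3 s before impact:
v_impact = v - a * 1.3    # ~10.4 m/s (~23 mph) instead of 43 mph
print(a, t_stop, v_impact)
```

Even in the worst case (braking only at the 1.3-second mark) the impact speed would have been cut roughly in half, and with the full 6 seconds of detection the car could have stopped more than twice over.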
Exactly my thoughts when reading about the blended wing aeroplane, yesterday.