The example video showed so many small things that we take for granted as human drivers, things that would need to be built in or hand-coded because AI or neural nets would never pick them up on their own. There just isn't a way to train for every single little thing that could cause an accident. Some things just require experience and fear, which I imagine can't be modeled by any current AI.
Humans have such incredible pattern recognition that we have built-in fear mechanics for when things are _not_ a pattern we recognize.
What percentage of the time do you experience fear when performing a task such as driving? Think about it - that's the margin for error that humans have in pattern recognition for that task.
I drove to work yesterday and saw where a driver had crossed the freshly painted lines to avoid a parked truck, leaving a second, lighter, swerving set of lines in the middle of the road.
Similarly, in my country, at construction sites where the road is redirected for a while, the new lines are often just a different colour and the old ones remain. Sometimes they aren't, and I've gotten confused at least once.
But what if it were intentional, or more pronounced? Perhaps we really should paint ourselves a Looney Tunes-esque scenario: the kind where our hero paints lane lines leading to a fake tunnel on a wall. Except instead of a wall, which radar easily recognises, it should be some other danger.
Also: if I put a cardboard cutout of a cartoon character next to or on a road, will it cause these cars to slow down? Will it risk swerving if the cutout appears around a corner?
Recognition performance matters.
I know there is a lot of work being done in this area as well, but what is the current state of it and what are the barriers?
Roughly the same idea with freight - a single truck driver serves the needs of many people.
That said, it's not at all clear to me that developing autonomous trucks for long interstate routes would be meaningfully cheaper than laying rail on literally every high-traffic route on the planet, which uses perfectly-understood technology and has entirely predictable returns.
Autonomous driving in good conditions on highways is already working quite well, but once it's wet/foggy and there are animals wandering onto the roads and other cars are behaving unpredictably... those are not straightforward problems to solve.
Also, for the unfamiliar, 'guard' here does not mean an armed guard.
To be clear, the conversation also isn't about autonomous driving for individually-owned cars. Tesla's disasters aside, it's about autonomous taxi services.
Both would be valuable - but a truck that is 30% more fuel efficient is probably going to save more money than one that doesn't need a driver (but still needs people for loading).
There are different estimates everywhere - but I usually see 20-30% of trucking costs are drivers. In other freight industries (cargo ships, trains, etc.) it would obviously be less.
So this is all about scale. Everyone is obsessing over Uber for cultural and political reasons, but there are 1/10 as many taxi drivers as truck drivers, and they earn half as much. The big money is in trucking, and even if the problem is only half solved (say just for some long-haul routes), it would be worth hundreds of billions of dollars, even if truck driver wages are not the single most expensive cost. Hell they could be the 10th or 20th most expensive, what matters is the billions that could be saved, not the rank ordering.
1) The system still has to react safely to pedestrians, cyclists and other entities outside of the network.
1.1) During the transition period, that is going to include vehicles driven by people (I do not think it will be feasible to ban the latter before self-driving vehicles are commonplace, except possibly in city centers and on some major highways.)
2) We do not want systems to learn how to take advantage of the protocol to drive aggressively and antisocially.
I am sure there is a technically sound protocol someone can come up with, but it would have to survive the corporate food chain, proprietary advances, and the three letter agencies.
They do this in LA already: you can choose to be tolled more (wireless tolls) or you can stay in the normal lanes. I'm much more OK with the regional infrastructure charging it, because you at least have a chance of knowing where the money goes.
The car manufacturers would love to have some sort of residual billing for crap like that I'm sure tho.
The fact that it doesn't currently happen is a good sign, but as corporations grab hold of society in more ways I don't see why a premium car company wouldn't try to differentiate with those kinds of benefits. As cars become more like a service provided than an object you own, it makes sense that those kinds of things would spice up a particular "package deal" for your car service.
But I'm also waiting for fake street signs to troll/manipulate self driving vehicles :)
Can I copyright the phrase “Tesla swatting”? :)
Also there is the reverse: people stealing signs (https://news.ycombinator.com/item?id=25223633) or putting stickers on them (or snow, mud, ...) where most humans know what to expect (stop sign, yield, ... have specific shape for a purpose; empty frame where city starts; ...)
Going out of a somewhat controlled small area in SV/SFO or AZ is a huge task. Going to a different country requires complete re-learning of the models ... and then there are humans ...
No human would think that number next to person is speed limit, but machines aren't trained for that...
> Situational awareness or situation awareness (SA) is the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status
There's a bearish case to be made - not only are the state-of-the-art algorithms nowhere near this - it takes kids quite some time to acquire this skill. Playing ball games with my kids, I've noticed even a 3yo still doesn't intuitively grasp Newtonian mechanics - they're trying to catch the ball where it is now, not where it's going to be in a few seconds. 8yo - notably better, though not fine-tuned yet. Heck, I know some adults who're lacking in this respect.
I've noticed over the years that the morning commute is a symphony of madness. Drivers going over the speed limit. Drivers doing illegal things in order to get to work and keep the traffic moving.
Cops seem to let a lot of infractions go during the morning commute. (The rest of the time they let nothing go, but in the morning commute they just seem to know it's a symphony of driving?)
I picture a self driving car going 25 mph with 20 vehicles behind it.
I don't think you could legally program a car to break any law, even in emergencies. Well you could, but lawyers would have a fun day in court?
It's as simple as that.
Putting aside the question of whether the tech is there or not (and I would argue it isn’t there for commercial consumer use), the infrastructure and the laws will take years to catch up. This isn’t an overnight problem and many of the barriers are things that tech can’t just design around.
I’m personally of the opinion we are 10 years away or more technically. The main issues are weather and the risk threshold people can tolerate. All the tests thus far are in high-sun, no-rain environments. If we are already localizing to southern cities, we may as well just add fiducial markers and guide wires.
In terms of risk threshold, look at COVID, risk of death under 45 is something like 0.5%. Or the black unarmed homicide count (<30). People have no way of measuring risk, just a few fatalities would have the public in a panic.
I don't think I ever tried harder in my life not to LOL.
Steve McConnell also talks about a common estimation pattern: managers keep asking if something can be sooner, and are only stopped when some engineer proves that a date is 100% impossible. Normally that means that the launch date starts out with a 1% chance of success. Clearly it was lower than that here, but I wonder if that's because a) it's very hard to prove what's impossible here, or b) the Uber/Tesla hype bubble was so intense that it didn't matter what the nerds said.
The hype got ahead of the tech.
I wonder if they'll have to learn to drive and get a driver's license?
It could be that by the time they come of age, there are self-driving cars, but they're much more expensive than getting their license (which in Germany costs around 2k Euro, including lessons).
Or maybe there'll be no viable self-driving, fully autonomous cars by then? Who knows?
I don't know of anything important that's running on anything that can be reasonably called AI.
I think we can get by without Siri.
Of course, if you include the entirety of machine learning under the AI label, then whatever. But in my opinion, a principal component analysis does not an AI make. Statistical inference has been around since before computers were invented.
Meanwhile, you've got marketers that are also trying to use that definition, in order to sell you on some software solution, despite it being oxymoronic.
Two better definitions of AI that I'm a fan of are "algorithms that model intelligence" (where "intelligence" is the ability to use knowledge in inference), or the broader definition of "second-order algorithms": algorithms that, rather than encoding a process that finds a solution, encode a process for finding a process that presents a solution.
My objection to the parent comment still stands; our world is not running on that.
For example, Wolfram Alpha, or OpenAI's algorithms that can learn how to play Super Mario will fit the definitions you provided, but our present world does not depend on systems like that.
I for one am highly skeptical that the kind of automatic personalized targeting that AI enables is actually worth the price, but the industry overwhelmingly believes it is. So, removing AI from the picture would be considered to greatly diminish the value of advertising on the web per industry pricing, and without the ad money, much of the web would dry up.
For one, I think it's silly to apply that moniker to something that's not at least interactive.
Machine Learning / Data Mining pipelines that I've touched were running nightly, and furthermore, the systems they powered could continue functioning even if the pipelines crashed / weren't running.
Because with ML, both the input and output are data (information, at best). ML doesn't interact with commands.
The whole point of the "A" in AI is that it's contrasted with Human Intelligence. Nightly batch processes simply ain't it.
Sure, people will still slap that label on anything, but it's boring to do so in the context of this discussion :)
Waymo is now offering rides in Phoenix AZ with no "safety driver". "Waymo One is our fully public, fully autonomous ride hailing service. Now anyone can take fully autonomous rides anytime they're in Metro Phoenix. Just download the app and ride right away."
Waymo also now has significant non-Alphabet investors, having raised US $2 billion in 2020.
No pricing yet, so this is still a demo.
If you’re referring to the Waymo rides in Chandler/Phoenix, they do charge. JJRicks (a YouTuber with no affiliation to Waymo AFAIK) has a spreadsheet of the rides he’s taken.
As an example, I was surprised to learn that Toyota designed their own ECU chips rather than buying off-the-shelf automotive grade MCUs until 2019, when they spun that business off to a subsidiary. They really take vertical integration seriously.
One thing I wish any car manufacturer would do is create a system that helps their users use the turn indicator. It’s an endemic problem here.
Something like an annoying voice telling them that they should have used it immediately after turning. And if they fail to use it in three turns during the same drive they get a mandatory safety briefing before they can lock the car.
You wanna write tickets? Post up a real meatbag cop and make them write tickets. Don't create some dystopian dragnet that you can tune for revenue generation.
ZOOX is another company I'm watching https://en.wikipedia.org/wiki/Zoox_(company)
Toyota's entire market is convincing people (the bulk of whom will trade in within a few years) that a Camry/4Runner/Sienna is worth substantially more than an Altima/Tahoe/Pacifica based on a bunch of promises about reliability after 150k that the statistical first owner will never see (spare us all the anecdote about your relative who trades in at 300k on the dot, I have one of those too). If robotaxi fleets become the game then Toyota is gonna lose, because commercial buyers who buy a fleet of whatever has the lowest all-things-included TCO over 3/5/10yr tend to buy a hell of a lot of Chrysler 200s, Nissan Sentras, Chevy Colorados, Chrysler Pacificas, Ford Transit Connects, and other not-so-premium vehicles.
If FSD tech is viable in the next 20yr I'd be very surprised if the reality of what that will do to Toyota's business doesn't keep them from going all in enough to bring a viable version to market early enough to get a good market share at it.
Otherwise, it's true that new cars from many different brands are reliable enough nowadays, even brands that used to be bad like Hyundai (unless you want to buy a car and keep it forever). But used car sales far outstrip new car sales.
You never see a Sienna getting loaded with bags of concrete at Home Depot. Can't say that about a Town and Country. You never see a family of 5 pile out of a (newish) Corolla. Can't say that about an Altima. You never see a 4Runner towing something way too big. Can't say that about a Tahoe.
Toyotas sell for a premium used because they get sold to "premium" customers who keep them nice.
If Toyota financed the people Chrysler and Nissan would finance (insert neggity joke), then in 20yr you would see all the upper middle class HN types shitting all over Toyota, because Toyota would be the poor people car. You'd see them in all sorts of states of disrepair and neglect and they would be stereotyped as unreliable; after all, why else would they be driving around on 3/4cyl?
I can't believe I'm defending Chrysler and Nissan but the blind Toyota worship is just that, blind worship. Most of their niceness comes from people keeping them nice. The real world edge between manufacturers is very, very slim.
At end of life they are no better or worse than anything else that's been treated the way they have (VW's regularly scheduled electrical fires notwithstanding).
It's because they are well engineered.
I'd expect the Toyota methodology to trickle down.
But brands that start with T and end with A can do no wrong on the white collar parts of the internet so Toyota gets a pass.
Last I checked Nissan and Mitsubishi (though they're on the up and up lately) were Japanese. The people of the Camry tax brackets don't exactly hold them in high regard.
1. They get desperate people to buy specific four-door vehicles, and don't even have to worry about maintenance costs.
1.5 Driver pays for fuel.
2. They don't pay for insurance.
3. They don't have to pay for liability.
4. They pay a Independant Contractor a percent of sales.
5. Most countries have a burgeoning sector of low-skill workers who are begrudgingly copacetic with being exploited. (This I find very sad, especially in a country I used to love. For decades there weren't a lot of takers for shit jobs. Now there's a line.)
When a cheaper less reliable car is in the garage being repaired it's not just about the cost of repair but also the opportunity cost of lost revenue.
Pretty much every minivan fleet in the US runs on Pacificas and Transit connects. Small trucks are always the Colorado.
Of course the cabbies love their Priuses but they only begrudgingly switched after the supply of ex-cop crown vics dried up.
Regardless, there is no other car company doing anything like the Mirai.
A roadway can be MUCH smarter than a car. Each segment of smart road is unique, and only has to worry about itself and whatever cars are present in the area. Smart roads could be managed by a few operators (in fact one operator could shepherd many road segments).
Each smart car has to be able to account for all possible kinds of roads. Smart cars each require an operator who is essentially just sitting there waiting for the car to fail (and therefore the failures become more catastrophic as the tech gets better, since trust rises and attention spans fail).
Put the tech into the ROADS, and just let the cars listen.
A few years back, during the peak of the self-driving hype cycle, I considered doing exactly this. I think with a small team of sensor and Bayes filter experts, a car outfitted with expensive but off-the-shelf sensors, and about a year of dev time, you could get a car that self pilots 95% of the time on a wide suburban road in clear conditions. That extra 5%, other road types, and worse visibility conditions would each take you far more effort than that. But if your goal was just to show results fast and get acquired, it'd be easy to convince people you were almost there.
> A roadway can be MUCH smarter than a car.
If we're building specialized roadways, you can steer a vehicle with two strips of metal tied together by wooden boards. And it can travel twice as fast as a car while holding a hundred times the passengers. And it was invented 200 years ago.
Amen to that. It just shows how crappy our economy is that we don't have trains, pneumatic tubes, and other old school technology that hasn't been beat everywhere.
The cost of replacing the entire US road system with "smart" roads would doubtless be astronomical - as well as increase the maintenance and infrastructural requirements. We've probably had the technical capabilities to create self-driving-capable roads for years, but the costs are probably too high for people to take it seriously "at scale."
Machines that do things "as well" as humans in similar roles have been a science fiction staple for over a century. That machines do focused tasks much, much better than humans hasn't stopped people from searching for "drop in" replacements for tasks that currently use a human. Suddenly, the scale is on the level of a single vehicle - very manageable if it works. That's why people were so willing to spend billions of dollars to avoid spending trillions on all that road replacement - because they wanted the worlds they saw in science fiction.
Imagine the investors' horror of building smart roads a few years before someone figures out a pure software solution. It's hard to be sure that it won't happen.
I'm generalising but I hope you get what I'm saying.
To that extent, was this acquisition actually a liquidity event for the employees or just an effective "change of ownership"?
"In December 2020, Lyft announced that it will launch a multi-city U.S. robotaxi service in 2023 with Motional."
Why? Because a truck does 500 miles per day. A self driving truck can do 1000. Amazon cannot exist without long haul trucking
(slightly out of date, and by value):
If (just to make the math easy) you need to pay a robot a third of what you need to pay a human, I wonder whether the negotiation would come out closer to
1. One part Uber, one part driver (so Uber's cut maintains its value) or
2. Still one part Uber, three parts driver, but the values are all lower and the total volumes higher.
I'd think it would need to shake out closer to the latter case than the former to keep Uber in the game, for a few reasons:
- Self-driving car companies are software companies. They can make Uber clones, given enough resources and time.
- There isn't really a two-sided marketplace bootstrapping process. These companies have enough capital to put a lot of cars on the road, and I don't think the economies of scale are actually that big -- there isn't much cross-benefit from scale in disconnected cities, and inside a given city you need 4x as many cars to halve wait times so the scale benefits tail off sharply. And I bet there's little brand-loyalty.
- If there are few self-driving car companies in a market, they'd have relatively large market power, so they could dictate rates more effectively than an atomised driver pool.
Shrugs, it could happen if the rates were low enough, but I suspect it won't. But I'm probably greatly underestimating the benefits of partnership -- dispatching is probably easy, and AV companies will necessarily be savvy with regulators, but customer-interaction and "app surface-area" are likely places where the existing players have a real leg-up. That said, the labour market is pretty porous in the Bay Area...
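The two splits sketched above work out to something like this (a toy model; the 4-unit fare, the 1:3 split, and the one-third robot cost are all assumptions taken from the comment, not real Uber economics):

```python
# Toy comparison of the two fare-split scenarios (all numbers hypothetical).
# Baseline human ride: fare of 4 units -> 1 unit to Uber, 3 to the driver.
human_fare = 4.0
uber_cut_human = 1.0
driver_share_human = human_fare - uber_cut_human
robot_cost = driver_share_human / 3  # robot "driver" costs a third: 1 unit

# Case 1: Uber keeps its cut in absolute terms; the total fare drops.
case1_fare = uber_cut_human + robot_cost
case1_uber_margin = case1_fare - robot_cost  # 1 unit per ride, as before

# Case 2: the 1:3 split is preserved, so the whole fare shrinks to fit
# the robot's cost, and Uber's per-ride cut shrinks with it.
case2_fare = robot_cost * 4 / 3
case2_uber_margin = case2_fare - robot_cost  # about a third of a unit

print(case1_uber_margin, round(case2_uber_margin, 2))
```

Under case 2, Uber's per-ride margin drops to roughly a third of what it was, which is why the comment argues volumes would have to rise to keep Uber in the game.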
~$30 billion R&D.
1:10 remote safety operator ratio (1p employees, not contractors).
$100K bill of materials/vehicle for the lidar sensors etc.
Compare this with an externally owned 8 year old vehicle being driven by someone who doesn't confer liability to the company.
Even if the dream was realized there likely isn't a viable business for robotaxis until the cars are on the road for 10-20 years.
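A back-of-the-envelope version of that comparison, with the fleet size, operator salary, and vehicle life all made up for illustration (only the R&D, operator ratio, and BOM figures echo the comment above):

```python
# Hypothetical robotaxi unit economics; every input here is an assumption.
R_AND_D = 30e9             # sunk R&D to amortize
FLEET_SIZE = 100_000       # vehicles to spread it over (assumed)
BOM_PER_VEHICLE = 100_000  # lidar sensors etc.
OPERATOR_COST = 80_000     # remote safety operator, fully loaded (assumed)
VEHICLES_PER_OPERATOR = 10
VEHICLE_LIFE_YEARS = 5

annual_cost_per_vehicle = (
    R_AND_D / FLEET_SIZE / VEHICLE_LIFE_YEARS   # amortized R&D
    + BOM_PER_VEHICLE / VEHICLE_LIFE_YEARS      # amortized hardware
    + OPERATOR_COST / VEHICLES_PER_OPERATOR     # shared operator
)
print(annual_cost_per_vehicle)  # -> 88000.0 per vehicle-year
```

Even with these generous assumptions, the robotaxi's fixed costs per vehicle-year rival a full-time driver's wages in a paid-off used car, which is the comment's point about needing a decade or two of amortization.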
People here like to whine that this would be recreating the bus, not realizing that the reason buses suck is because they have rigid departure schedules, and fixed startpoints and endpoints. People mainly take Ubers over the bus because they're much more flexible.
I think this is why only Waymo or Cruise or one of the other well-funded-by-a-profitable-other-business companies will continue to progress towards the true self-driving vehicle... it's just too damn expensive and uncertain as to when the breakthrough/development will happen.
1/ It is profitable/break-even for the best drivers (this means you have to optimize and consider the external costs of doing the work... which, unsurprisingly, most people are bad at doing). There are certainly cases of people who have leased a brand new car with poor gas mileage and then driven it into the ground, surprised it hasn't paid off. However, using a Prius or all-electric vehicle with high reliability results in lower costs. If you run a business and poorly optimize around your costs, you don't make profit...
2/ Drivers do drive for years for them, but the vast majority do not, because it is not a great long-term career and usually serves as a bridge to full-time work or regular part-time work. You can't get promoted, you can't get paid much more for tenure, and you don't pick up any new marketable skills. Even a career warehouse worker or McDonald's front-line worker has a path upwards... and people do work there for short and long periods of time. Just because everyone isn't a long-term employee doesn't make a profession unsustainable... it's just the nature of the job.
The last statement is pure conjecture so I'm not sure how to address it outside of the fact that in an urban setting car ownership comes with a lot of costs and these services expand the radius of people's ability to navigate cities. There will be a premium for that and Lyft and Uber are working their way towards providing that service AND generating a profit.
What does training look like? Do self-driving cars only work in certain countries where they have been trained?
I work in this space and am not aware of what you're describing. M/L calibration is tuned where it's legally allowed to run on the streets as well as where there's interesting problems to practice against (bridges, tunnels, etc.)
The same way humans learn/train.
Observe how most people behave, then imitate.
That's a lie, though. The meme that México is an uncivilized place is dumb and doesn't hold much substance.
Edit: Downvote me all you want, GP's statement is laughably easy to disprove. ¯\_(ツ)_/¯
Blinkers can mean something different at different times, depending on context, while traffic lights and signs are treated as a suggestion. Speed limits and their enforcement are non-existent (outside of expensive toll road), and lane lines aren't something that have meaning. Pedestrian traffic (or hell, animal and farm traffic) is unexpected and unexplainable, as are public transportation options (imagine giant school buses flying through roads, playing leapfrog to get in front of one another to be able to pick up the next people waiting for a ride).
The consequence of these differences for self driving cars will be a very, very difficult problem to solve unless the majority of the vehicles are self driving, which is not a solution that will happen anytime soon in Latin America.
I go (or used to go) to Guatemala a lot and the camionetas are so insanely scary! Going over huge mountains, roads with giant holes in them, tipping this way and that, filled 3x per seat or more, with mounds of carrots and cargo on top. Lots of times the money guy hangs out the front entrance while it's driving!
They go SO fast it's super scary to me I'm surprised I haven't seen one tip over on those corners or bust a brake and run off the mountain.
There was more mayhem, but that one is illustrative.
In my experience, the laws are not as closely adhered to as I have seen in other places.
Like, so chaotic that I refuse to do it different. I leave the driving to the locals when I'm there.
I don't think anyone's saying Mexico is uncivilized. It's a beautiful place with beautiful people, massive cultural output, great universities, etc.
I'm just not going to invest in a startup trying to build self-driving cars for Latin America anytime soon.
You can describe humans in exactly the same way.
In theory at least, it can just be some algorithm and some screen space on the app.
- Ideally in a shared ride what happens is that instead of 2 drivers, driving 12 miles for passenger A and 14 miles for passenger B, you have 1 driver who drives 15 miles for both passengers trips.
- So 1 driver to pay who is now more efficient, and 2 paying customers. You charge each customer X% less, pay the driver Y% more, and theoretically you could keep your margin the same but now fulfill more rides (another driver is free now that you put two rides in 1 car)
- However, now let's consider how much cheaper it can really be...
- Sharing a ride for a cheaper cost makes sense when you and the person you share with have a generally overlapping route. The discount you get as a customer is a function of how likely you are to get matched with someone.
- Turns out there aren't a lot of rides with good overlap (airport rides might be the best type of ride tbh). Thus the discount is quite small. If the discount is small it means you have less people using it. Less people using it makes the discount even smaller! Eventually you have no discount and no incentive to use the service.
- To keep users incentivized to use the shared mode, Lyft and Uber have to subsidize the pricing to make sure that match rate stays high. Every "shared" mode ride that has only 1 person in it is a big loss, but incentivizing more people to use it can result in a smaller net loss across the marketplace
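The arithmetic in the bullets above can be sketched with a toy model (the per-mile price, driver share, and 20% discount are all made-up illustration numbers, not Lyft's or Uber's actual rates):

```python
# Toy model of pooled-ride economics; all numbers are hypothetical.
PER_MILE_PRICE = 2.00  # what a rider pays per mile on a solo ride
DRIVER_SHARE = 0.75    # fraction of the fare paid out to the driver

def solo_margin(miles_a, miles_b):
    """Platform margin on two separate solo rides (two drivers)."""
    revenue = (miles_a + miles_b) * PER_MILE_PRICE
    return revenue - revenue * DRIVER_SHARE

def pooled_margin(combined_miles, miles_a, miles_b, discount):
    """Platform margin when one driver covers both trips in fewer miles."""
    revenue = (miles_a + miles_b) * PER_MILE_PRICE * (1 - discount)
    driver_pay = combined_miles * PER_MILE_PRICE * DRIVER_SHARE
    return revenue - driver_pay

# Two solo trips of 12 and 14 miles = 26 driver-miles,
# versus one pooled trip covering both in 15 driver-miles
# with a 20% rider discount, as in the example above.
print(solo_margin(12, 14))                       # margin on two solo rides
print(pooled_margin(15, 12, 14, discount=0.20))  # margin on the pooled ride
```

With good overlap (26 passenger-miles served in 15 driver-miles), the pool can pay a discount and still widen the margin; with poor overlap, the driver-miles barely shrink and the same discount becomes the subsidy the comment describes.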
Which one do you think is more realistic, staying L2 by EOY 2021, or magically leaping forward to L5 FSD in 8 months?
The only company working on self-driving that I believe when they issue press releases is Waymo, because Google isn’t trying to juice their stock price all the damn time, and they have operable robotaxis in AZ. I don’t think Waymo claims L5 capability either.
Figure out 99.9% of driving, but otherwise take a family off a bridge when the sun is blinding the camera? Still need a steering wheel.
The person he was talking to said they were talking like normal and suddenly the phone cut off.
Computers probably cannot be worse than the people already on the road.
The psychological, ethical and legal implications are completely different. If tomorrow I drive a car and run a kid over, then I'll be in trouble and you'll probably never hear about it. If tomorrow I get in my Tesla self-driving car and it runs a kid over, then you'll hear about it everywhere and Tesla's responsibility will be invoked. Because whose else would it be?
The bar for self driving cars is not being as good (or bad) as a human driver, they need to be orders of magnitude safer in all situations. They need to have airplane industry numbers, not Average Joe drunk driving numbers.
Self driving needs to be better than me, not better than average.
If you tell me that no such driver assistance exists, just apply it to scenarios where it does exist, like lane keep assist and automatic emergency braking.
If you are better than average you can keep your dumb car.
I drive about once or twice a year, always in a rental, and consequently never really know how well my car handles or its dimensions. That, plus generally being rusty, tends to make me fairly nervous.
Yet, nevertheless, I am in charge of a 2000kg block of metal hurtling around pedestrians and cyclists at 70mph.
Scares the shit out of me, to be honest.
The last time I drove was taking my rabbit to the vet at 2AM on X-mas day, where the vet said due to Covid I would have to wait outside. Not really practical when the temperature is below zero, so I used Zipcar!
Besides that, mainly if I'm going somewhere for a week with poor public transport and high distances to cover (impractical to cycle/walk).
Both of those things relate to the fact that I have significantly better situational awareness and driver training than the vast majority of people, including significant time on track racing cars. To match my situational awareness it will not be enough for self-driving cars to rely on their own sensors, but they will need to communicate with one another and act in concert, otherwise my reading of traffic patterns and looking far ahead in the distance will always beat it out for the quickest path through traffic, or the safest response.
To me, where I see a benefit for myself personally, is that self-driving cars don't necessarily need to be better than me, they just need to be better than most, and turn driving from an active to a passive activity. It means I can be engaged in other things. But, that's also the rub. I /like/ driving, so I don't want to exist in a world where I am /forced/ to have a self-driving car to use the roads, as I still want the ability to go out and enjoy myself from time to time. But for a daily commute, turning driving into a passive activity would make me a happier and more productive person (other people are horrendous drivers and it frustrates me observing their unsafe and careless behaviors).
That's a psychological problem that self-driving cars will have to overcome. They need to be so obviously better than any human driver for people to actually consider them. A handful of fatal accidents that people think they could have avoided as a driver and they won't want to get on board.
People are bad at risk assessment, but that's a fact of life. As a result, self driving has to be better than what people _perceive_ as their risk of being in an accident while driving, not what the actual risk may be.
Machines making horrible mistakes that no conscious person would ever make is problematic because we don't have reliable error correction methods for that kind of mistake.
An FSD car will never accidentally press the acceleration pedal when trying to press the brakes, or lose control while trying to read an SMS. Instead it will mistake a bird for a train, hit and run someone, and it would be like “something slowed me down, are my batteries degrading?”
How do you deal with a driver that fails to understand what’s going on?
It's not that we're bad at driving, I suppose, it's that we do an awful lot of it and it's quite a risky activity - especially if you're not 100% engaged.
That being said, if the US cared about road safety, there's quite a lot they could do to improve it already. Many techniques have been used successfully in other countries to significantly reduce road deaths, for example, a nationwide vehicle safety inspection standard and lowering speeds on urban roads.
The fact that these aren't being done leads me to think the concerns about road safety are actually rather irrelevant for the country as a whole. When self driving cars exist and are convenient, people probably will switch to them, as long as the accident rate is vaguely comparable.
All vehicle-related deaths in the US average around 32,000 annually, of which about half are alcohol and/or drug related, leaving the true accidental vehicular death number around 16,000 annually.
The problem we're trying to solve with FSD just doesn't exist at the levels FSD proponents would have you believe. Humans are incredibly safe drivers when you account for the number of people driving, and the number of miles driven per year.
Criticize human driving all you want, but even the worst of us can typically manage to get from point a to point b... Somehow.
I mean, that industrial agriculture eventually results in food on tables across the globe. So many systems at work. Run by the same people using their smart phones in their cars, haha. We manage to collectively make things happen.
Easy to criticize, but hard to argue against the results?
A million miles sounds like a lot, but it isn't. The average American drives 12,000 miles annually. Assuming a driving career of 52 years (from age 18 to 70), the average American will drive 624k miles. So the lifetime chances of getting into a serious accident are not exactly small.
(I don't know how to calculate the probability here, actually. Is 624k/1M the lifetime probability of getting into a serious accident?)
624k/1M is the expected number of serious accidents.
The probability of getting into at least one serious accident is 1 - (999999/1000000)^624000 ≈ 46%. (Every mile, you have a 99.9999% chance of no serious accident. You want that 624k times in a row.)
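The arithmetic above is easy to check; here is a quick sketch in Python, assuming the hypothetical one-in-a-million serious-accident rate per mile used in the comment above:

```python
# Probability of at least one serious accident over a driving lifetime,
# under the assumed rate of one serious accident per million miles.
miles_per_year = 12_000
years = 52                                 # driving from age 18 to 70
lifetime_miles = miles_per_year * years    # 624,000 miles

p_per_mile = 1 / 1_000_000                 # assumed per-mile accident probability
p_none = (1 - p_per_mile) ** lifetime_miles
p_at_least_one = 1 - p_none

print(f"{p_at_least_one:.1%}")             # → 46.4%
```

So 624k/1M is the *expected number* of serious accidents, while the chance of at least one is a bit lower, since multiple accidents can happen to the same unlucky driver.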
And yet there is no evidence that computers are better drivers than people are.
I do have to ask, are we using the same computers? I've been using them for decades, and they're consistently buggy, error prone and straight-up factually wrong a lot of the time.
But there is no reason we should be happy if a self driving technology "only" kills 1.2 million people in a year. That number is absurd and should not be considered acceptable. I think in the semi-distant future we are going to look at manual operation of a motor vehicle as a dangerous party trick, something only to be attempted by professionals or in limited circumstances like pulling into a field to park or some other low speed maneuvering.
1. That's an incredible amount of saved time; people now get back the time they used to have to spend driving. We would have eliminated the largest suck of human time on the planet (truck driving). Etc. The main benefit of self driving is not safety.
2. We have a working baseline that we can improve upon to drive that number down, and since computer programs don't have the "new drivers need to learn from scratch" problem, those improvements will stick around approximately forever.
totally agree but I'm skeptical of current tech to get there in the near time frame being talked about. It's all in how you define semi-distant
Computers are better than some drivers, but they're worse than others. If the goal is safety, computerized assistance is almost certainly better than self driving. Keeping the human driver engaged and doing their best, but having the computer supervising, works a lot better than having the human supervising; it's hard for a human to remain alert enough to intervene quickly when needed. I don't know if there are good studies yet, but I expect automatic emergency braking to reduce the severity of a lot of collisions. Cross traffic warnings probably eliminate a lot of minor (and some major) collisions. There are systems now to detect wakefulness; if those are combined with something to safely pull over and stop, that could reduce a lot of tragic collisions as well.
If the goal was safety, a computer assisting a competent driver would be the best. If the goal is private profits for a few individuals, then a computer that doesn't take a paycheck would be best.
Computers currently are far worse than the good drivers out there. It is not clear if they ever could be better than a trained and cautious driver.
For example you can easily avoid the situation above by not talking on the phone while driving across rails.
If the AI abruptly crosses the median, how are you going to avoid that?
I laughed at this, thinking about how the German trains are usually ±30 mins and often off by hours.
The train schedule would probably not fare better than a coin flip, except in Switzerland or Japan.
Just check the train locations instead :)
Similarly, lidar resolution is overkill, and an overcomplication compared to a moderately sophisticated mm-wave radar, which on top of that will be much more durable and reliable.
Much easier than all of the weird edge cases related to the behavior of other drivers or pedestrians.
The reality is that for a long time we will combine both sets of capabilities and use "self driving" tech to enhance human driver capabilities.
In that case self-driving first needs to be able to avoid the relatively simple case of not smashing through a barrier, and the human driver can use their wetware to figure out how to handle railroad crossings, which are diverse and complicated.
I don’t want to risk getting figuratively in a car with one if I turn on FSD.
Winter driving as a human driver requires an entirely different approach and Waymo hangs out in the Arizona desert where there is basically never any inclement weather.
I think we'll get such a war within the next decade, so we may see FSD vehicles sooner than expected, just not in the way we want.
* Coverage as a percentage of scenario's occurrence over the total duration of driving. For example, over 90% of a long-distance trip will be spent on a highway following traffic patterns within a lane with the occasional lane change.
The hardest part was that Google hadn't mapped a service drive, so it thought the adjacent service drive was the best route (which would be addressed pretty fast if you were trying to deploy self driving service in that area).
I don't think we get to level 5 very soon, but level 4 cars will have the ability to go lots of places pretty soon. If I overestimate driveway distances from yesterday, it's like 99.966% lane miles. There was some construction, but it was already well marked for human drivers.
Humans are terrible drivers. If a self-driving car got into half as many accidents as the average human, it would save millions of lives. And kill people, to be sure, but fewer on net.
I also think you could make a reasonable argument that all cars should be banned, right now, based on how many people they kill, but since I don't think that's gonna happen...
I'm a pretty attentive and cautious driver. In the 20 years I have been driving I've been in one accident and that was because another driver was attempting to make an illegal left turn, came across two lanes of traffic, and t-boned me. So if self driving vehicles are only doing better than the worst human drivers, I'm going to be pretty hesitant to turn over control. I'd be in favor of that other driver that hit me using an autonomous vehicle though.
It doesn't seem impossible for there to be a long tail of driving (in)ability: most people drive pretty well, with a small fraction that are distractible or reckless enough to account for most accidents.
Think about it — most software is deployed cloud first these days, but one of the most complex computing tasks we have is relying on some black box computer.
I'd be willing to bet it's a quarter century or longer problem. (Longbets, anyone?)
FSD is absolutely achievable, but the task is much bigger than some proponents give it credit.
Then the steering-wheel-less car at least has the ability to call for help when it's only 98% sure of what to do instead of 99.99% sure. But obviously this kind of model only makes sense in a fleet context, not as an extension to something you own, so it requires greater shifts, at least for personal automobiles.
I love these wildly over-the-top exaggerations.
When was the last bridge you saw without a barrier to prevent going off the edge?
What makes you think a vehicle vision system will handle "blinded by sun" any worse than humans already do?
Remember it's projecting and predicting the road ahead, even around corners and in the dark - so being blinded by the sun isn't going to cause it to swerve wildly off course and off the bridge - it can continue to use the data it had before being blinded (just as you do).
Also remember it has eight cameras it uses for this. The 16 year old new driver texting and talking to friends coming towards you at 60mph has two.
That's a disingenuous question. Many bridges have wooden guards that will prevent you from going off if you make a small mistake, but not a large one.
> What makes you think a vehicle vision system will handle "blinded by sun" any worse than humans already do?
Humans can move their head, and block the sun with a hand, hat, or sunshade in the car. Humans have two eyes so if one is obstructed, the other may still get good vision.
> so being blinded by the sun isn't going to cause it to swerve wildly off course and off the bridge - it can continue to use the data it had before being blinded (just as you do).
Unless it's an incredibly well-trained AI, it may mistake lens flare for oncoming traffic, or a pedestrian. Car AI is not at the point where it has common sense to assume that lens flare is incorrect information.
The goal of an AI driver isn't no crashes. The goal is fewer crashes than human drivers.
My two analog eyes see this perfectly well, why didn't the AI?
I think there's a lot of unfair criticism of human drivers in this thread. I don't think we're at the point where we can call machines better than humans when it comes to these tasks.
Self driving software tends to have very poor object permanence.
A real solution has to be capable of cleaning all cameras (probably faster than a windshield wiper would), because the distortions caused by even normal rain are hellish to train an AI to handle.
Where I live the streets are tight and most drivers mediocre at best, unaware that cyclists might have right of way and about 1/20 doesn't seem to know the difference between different kind of light settings in the car. At least half the cars on my street have visible scratches/dents.
For me that is the standard to be beaten, not perfection. And the car could still give a signal and ask for a human to take over in some cases. Self-driving cars for me could also be much slower; there's no need to speed when you can read a book or play games.
I kind of like Elon being able to say 'whatever', but on the other hand, it's not his money now, it's crossed the threshold into public financing, so statements like L2 v. L5 are 'material' and saying the wrong thing is a 'lie' and 'bad'.
Again, the paradox is that he's going to be hosting SNL, which is kind of fun to see; on the other hand, it's going to be another occasion to hustle a stock or some kind of crypto, which is distasteful.
They did the same thing with Skybox imaging/Terra Bella.
Except Waymo has never tried to claim L5 capabilities. They are and have always been a strictly L4 technology and they have said multiple times that L5 is not possible to achieve.
I am 100% unconvinced that Tesla can get anywhere with their current system. I don't see how their low resolution cameras can get the necessary information for Level 5 autonomy. It almost feels like a reckless brute force approach to the problem. "Just let AI figure it out". Every autonomous vehicle company is going to be using some sort of machine learning but they're going to be feeding in huge amounts of data. Waymo for example is using multiple LIDAR scanners to build up an accurate 3D model of the world surrounding the car. That's what you need. Not what is effectively guesswork by an AI.
We still don't have truly reliable face detection even after decades of research and we're supposed to believe that a car can reliably drive itself on shitty low resolution cameras alone.
That seems like a problem that could be solved with their current sensors.
But TBH I'm not really THAT impressed it can take corners and follow more lines. It still doesn't handle very important edge cases, and the people testing and uploading videos are naturally biased to show how good it is to push their referral codes.
I'm not anti-Tesla, I love my M3, but we need to be realistic about the future of what these particular cars can do. They're never going to be L5 with the current sensor suite - and they're certainly never going to be robotaxis. Who really wants that anyways? Last thing I want is some drunk bros to destroy my car and have to deal with Tesla support.
But it's non stop interventions...
He definitely uses ambiguity to his benefit (eg "soon", "by fall/winter/spring/summer" (in which year?)), but I haven't heard anything about Tesla being L5 by the end of 2021.
I went ahead and Googled one for you from 2020:
Tesla will be able to make its vehicles completely autonomous by the end of this year, founder Elon Musk has said.
It was already "very close" to achieving the basic requirements of this "level-five" autonomy, which requires no driver input, he said.
- July 2020
I remain confident that we will have the basic functionality for level five autonomy complete this year.
Basic functionality for level five isn't level five.
Who are you trying to fool?
I'm surprised you say this totally seriously. For me this means that level 5 will be this year, and then there might be extra new cool features, which are not necessary.
Basic functionality for level 5 sounds like both necessary and sufficient to me
That's still not sufficient, because it needs to do that in all circumstances (vehicles, people, animals, stuff, weather, etc.), but even if it can handle all the circumstances (also basic functionality), if it can't handle at least all roads, then it's not level five.
Right now, it's level two, and it's IMO going to be level two for a while. People can ask about level five, but I take any statements about Tesla's current autopilot and/or beta as statements about the level two system.
People can assume that those statements are about some future level five system, but I don't think that's an accurate assumption unless someone from Tesla says autopilot and/or the beta is now level three, four, etc...
If basic functionality does not need to be sufficient, then even the tiniest necessary thing is basic functionality? Like installing a camera on each new car? In that case, Tesla already has basic functionality for level 5 in terms of both hardware and software.
Tesla does have basic hardware needed for level two functionality, maybe more. They could have basic software functionality needed for level two or more with the beta, but someone would need some kind of audit to confirm that.
Google is universally accessible to everyone. Please don't be that guy who corners himself into a blind spot.
Musk claimed coast to coast self-driving trip by end of 2017.
edit - The closest is about the basic functionality for it being ready by the end of the year (see above).
Musk: So -- and this is -- basically I'm highly confident the car will drive itself with reliability in excess of a human this year. [...]
Director of Investor Relations: [...] The next question is, why are you confident Tesla will achieve Level 5 autonomy in 2021? [...]
Musk: I guess, I'm confident based on my understanding of the technical roadmap and the progress that we're making between each beta iteration.
He did refer to the current FSD beta, which is a level two system, and expressed confidence in the technical roadmap and progress they are making between each beta iteration. Then he talked about the level of reliability they would need, capabilities of the system, and challenges they have.
I guess, I'm confident based on my understanding of the technical roadmap and the progress that we're making between each beta iteration. Yes. As I'm saying, it's not remarkable at all for the car to completely drive you from one location to another through a series of complex intersections. It's now about just improving the corner case reliability and getting it to 99.9999% reliable with respect to an accident.
Basically, we need to get it to better than human by a factor of at least 100% or 200%. And this is happening rapidly because we've got so much training data with all the cars in the field. And the software is improving dramatically. We also write the software for labeling.
And I'll say it's quite challenging. We're moving everything toward video labeling, so all video labeling for video inference, and there are still a few nets that need to be upgraded to video training and inference. And really, as we transition each net to video, the performance becomes exceptional.
So this is like a hot thing. Making the video labeling software better has a huge effect on the efficiency of labeling. And then, of course, the Holy Grail is auto labeling. So we put a lot of work into making the labeling tool more efficient, as well as enabling auto labeling where we can. Dojo is a training supercomputer.
We believe it will be -- we think it may be the best neural net training computer in the world by possibly an order of magnitude. So it is a whole thing in and of itself. And this is offered potentially as a service. So for some of the others that need neural net training -- we're not trying to keep it to ourselves.
So I think there could be a whole line of business in and of itself. And then, of course, for training vast amounts of video data and getting the reliability from 100% to 200% better than average human to 2,000% better than average human. So that will be very helpful in that regard.
The question is "why are you confident about X", and the answer is "I am confident because we're making progress in these ways with Y".
I think the assumption that it's specifically a lie really highlights where these ideas are coming from.
An investor asks “Why are you confident in L5 self-driving before the end of 2021?” and Elon goes on to explain why he is confident that Tesla will achieve L5 self-driving by the end of 2021.
Musk said he's 'confident' reaching l5 autonomy by end of 2021 in the Q4 earnings call, which is on youtube.
The only specific statement I've read from him is that the basic functionality for level five will be available this year.
What's interesting to me is that people seem to attribute what news articles synthesize about his statements as statements made by him.
Musk promised the coast to coast drive in 2017:
Then admitted he couldn't do it in 2017:
This isn't the media delivering a dishonest commentary. These are his words.
But yeah, we can have both.
Those were his words in that situation, and he was called out for it. In this situation, the media is delivering dishonest commentary, and trying to call him out for their commentary.
To me, FSD is ... FSD ... what's FSD if it isn't level 5?
Sorry, to Elon fans (and I appreciate Tesla), but it's nonsense and people are fools to put up with it, let alone pay $10k or whatever it is for the "opportunity" to have it later.
As for the specifics, it's what's described on the Tesla website.