So I am super excited about any substantial progress in self-driving cars deployment.
Stay safe out there.
The danger is other cars on the road. I've been thinking about building computer vision tech to spot inattentive drivers; it's something motorcyclists (myself included) could really benefit from.
I imagine a system in which 'poor driver' tags would decay with time. Data is shared.
Would be neat for self driving cars to update this data in real time, so you could see a data overlay of their decisions and intentions. A long green arrow would show their intended path, icons show you if they have seen you, timers counting down e.g. I'm pulling away in 3, 2, 1 ... now
AR is going to be fun.
And potholes, debris, slick spots, ... the kinds of things that might inconvenience or damage a 4-wheeled vehicle can be deadly to a 2-wheeled vehicle at 60+ mph. Losing control of a car, you might skid, spin, or flip. Rolling a car can be deadly. Losing control of a motorcycle at high speed will almost certainly cause serious injury or death.
You're probably not as good a driver as you think you are. After all, we can't all be above average.
Both my accidents were with HGVs; the reality is the danger is everywhere.
Only one was their fault; the other was entirely mine.
There's no way to make motorcycling as safe as driving, but last time I did the math I found you can recover almost an entire order of magnitude just by obeying the law and wearing a full face helmet. And there's plenty more you can do from there -- take an MSF course, avoid congested areas during your first year on two wheels, and get a modern bike with ABS. Statistically speaking, the benefits of all these things are compounding.
PS: 72.34 deaths per 100,000 per year means the average lifetime motorcycle rider has a ~4% chance of a fatal accident. That's approximately the same death risk as a first-time suicide attempt. Regular riders are even worse off.
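Back-of-envelope, that lifetime figure follows from compounding the annual rate -- a rough sketch assuming the rate stays flat over roughly 55 years of riding, which is a simplification (real rates vary with age, experience, and exposure):

```python
# Sketch: lifetime fatality risk from a constant annual rate.
# Assumes ~55 riding years at a flat 72.34 deaths per 100,000 riders/year.
annual_rate = 72.34 / 100_000
years = 55

# Probability of surviving every single year, compounded.
p_survive = (1 - annual_rate) ** years
lifetime_risk = 1 - p_survive

print(f"lifetime fatality risk over {years} years: {lifetime_risk:.1%}")  # ~3.9%
```

Which lands right around the ~4% quoted above.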
I suspect that in the US, the lack of caution in bike culture is exacerbated by how relaxed our laws and regulations are (I've looked for data on this, but I haven't found great cross-country comparisons). We just don't impose on bikers in the same way we do on drivers. For example, we mandated ABS on all new cars nearly two decades ago, but we still don't require it on new bikes.
In fact, even in the US, I suspect riders have European laws to thank for the safety features on their bikes. It can't be a coincidence that, for example, Zero Motorcycles began installing ABS with their model year 2015 bikes -- just in time for the EU law that went into effect in 2016.
You personally can decrease the risks, but not enough to offset the 35x per-mile risk and make daily riding safe.
Nothing made me feel safer on my new motorcycle than learning that stat. Don’t do dumb shit and your personal danger per mile can be quite okay.
Also you’re most likely to die in your first and your third year. Then the probability drops off a cliff.
Stage 1: unskilled, unaware => die
Stage 2: unskilled, aware => not die
Stage 3: unskilled, confident => die
Stage 4: skilled => live
The difference is that certificated flight instructors won't let you solo until you're well into Stage 2. But any noob squid with a credit card can ride off the lot with a liter-class bike, no questions asked.
Most common way to die is running off the road when going too fast into a corner. Followed by being too aggressive in intersections and failing to avoid other people making mistakes or not seeing you.
If a car forces right of way and you could've stopped but didn't, it's effectively your fault. Legally it's their fault, but you're the one with fewer crumple zones, so you've got to drive defensively.
- A decent proportion of accidents involve unregistered bikes or riding under the influence.
- Speeding, as always
- A big risk factor is older people returning to motorbike riding: they remember their old skills but don't have them any more.
- Having ABS is associated with something like a 30% reduction in accidents.
There's still a bunch of risk on a bike, but someone sensible can improve their odds a fair way. Though I guess you could argue that personality type would likely have well-below-average odds in a car too, so it's still a relative increase...
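A rough sketch of how several risk reductions combine if you treat them as independent multiplicative factors. Only the ~30% ABS figure comes from the list above; the other numbers are hypothetical placeholders, and independence is itself an assumption:

```python
# Sketch: combining relative-risk reductions multiplicatively.
# Only the ~30% ABS figure is from the thread; the other factors are
# hypothetical illustrations, and independence is assumed.
factors = {
    "ABS-equipped bike": 0.70,    # ~30% accident reduction (from the thread)
    "never riding drunk": 0.50,   # hypothetical placeholder
    "obeying speed limits": 0.60, # hypothetical placeholder
}

relative_risk = 1.0
for name, factor in factors.items():
    relative_risk *= factor

print(f"combined relative risk: {relative_risk:.2f}x baseline")  # 0.21x
```

With numbers anywhere in this ballpark the reductions stack to a several-fold improvement, which is the "compounding" point made earlier in the thread.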
If anything (an accident) happens, you fall and hit something - the ground or otherwise - or are sometimes run over, again because you are exposed. You lost balance? Same. You hit sand at even half normal speed? You might still fall and hit. Wet surface, mud, speed bumps: same. You hit another bike? Same (maybe for both). You hit a car? Same for you, but not for the car driver.
When the bike falls you are the one who takes the hit. Most of this doesn't happen in a car. Just my observations as a biker. Yes, riding at sane speeds reduces these risks a lot.
I rode a motorcycle for 8 years as my only mode of transportation and can clearly remember how little alcohol or marijuana was needed to notice a significant performance degradation (much more noticeable effects from a small amount of substance use than when driving a car).
* Wear a goddamn helmet
* Don't drink and ride
* Don't speed
* Get a bike with ABS
According to recent US numbers (Uber's Elevate study), it's about 38x (per pax mile).
EDIT to add: table on page 17.
Which mile it is matters, as does who is riding the bike.
When you know how dangerous careless driving is, you drive more carefully.
Not sure if I will ride again. I love it but despite my gear, practice, having taken advanced riding courses, it really creeps me out that I got into a situation where I could only minimize my injuries.
Ah, the infamous SMIDSY. "Sorry, mate, I didn't see you!" has claimed many of our peers.
fatality. broke the rules. NOTABUG.
We have radar and lidar, tried and tested technologies being used for decades. Unlike the wetware you speak of, this system doesn't get tired or distracted.
Yes motorcycles are smaller than cars, but that won't be a problem for radar/lidar, and with proper sensor placement, there won't be any blind spots.
Not sure how you can say that, given recent accidents (crashing into a pedestrian pushing a bike across a street, crashing into a stationary lane divider). Possibly other challenges are even harder, but describing a task that real-world driverless cars repeatedly fail at as "the easiest" strikes me as overly cheery/Pollyannaish.
It boils down to execution and application of the technology. The key elements, radar and lidar, make it really easy to detect objects around the vehicle in real time and react quickly. How an autonomous vehicle company decides to use these technologies determines the overall effectiveness. For example, Tesla doesn't use lidar at all, for cost savings; it relies on radar and machine vision for localization, which to someone like me is scary. Waymo, on the other hand, uses lidar. Look at their accident rates: very impressive indeed.
1. Uber crash, Tempe, Arizona, 2018-03-18
2. Tesla crash, Mountain View, 2018-03-23
> Autopilot may not detect stationary vehicles at highway speeds and it cannot detect some objects.
To be more precise, then: I agree with you that it might be "very easy" to detect objects and then, say, stop. But apparently self-driving cars would then stop all the time needlessly, which is why they're not programmed that way.
So, what is hard, then, it appears, is to avoid crashing into people or objects, but still keep driving when safe.
I expect a pivot (>10y) to wetware if people really do* demand to give up their dv/dt. Rat neurons were flying F-22 sims a decade ago.
*(bad idea, I don't think we will)
I'm sure you'll learn the ever changing patterns ok.
Have any self-driving cars been shown to work in motorcycle lane-splitting areas, like what happens in California in the HOV lane, or anywhere else in the world besides the other U.S. states?
There is no evidence other than exponential-curve worship to believe that driverless cars on the existing road network will ever be less lethal than the surprisingly high benchmark set by sober drivers, and plenty of reason to believe they will increase car use. Waymo demonstrating that their cars are surprisingly competent and less aggressive than human road users is very different from demonstrating that their AI is smart enough to make a fatal error less frequently than every half billion miles in real-world driving conditions.
They are killing machines driven by idiots with low IQs. Why do I need to put my life in their hands every single day?
I dunno. There is a huge risk with driverless cars. I'm still questioning the tech. This is an incredibly hard problem and edge cases mean people die. This is one technology I won't be an early adopter of.
If you made a teleportation system where you could step in and come out the other end anywhere in the world, but there is a 1 in 1000 chance you step in and turn to green goo on the other end, should we still adopt it?
You basically save millions of lives, but people seem to have come to the notion that they control their own fate when in a car (which is 100% untrue). You may be able to mitigate some accidents, but getting T-boned at an intersection when you have a green light but no visibility is possible every single day. Your wheel bearing could blow out on the highway and send you rolling into a ditch. A deer could step out in front of you, come through your windshield, and kill you. You have no ability to save yourself other than praying.
But the edge-case argument also applies to human drivers. We suck at sudden, unexpected reactions. We panic. We overreact. We die. We kill.
I'm not saying that Self-Driving cars should just go on the road now no matter what - but the point of "edge cases kill people" is a perfectly valid argument against the status quo.
Oh, but you will.
You don't have to go that far. Smart people drive cars and they fuck up just as much. We (humans) just suck generally at that task, and with more and more assistive features coming in, the attention paid to driving is going down even more.
I’m totally serious. This is true for most every car trip. You trust yourself and the idiot in the other lane with your life every single day. It’s amazing how well cars protect us these days when we crash them. I’m excited about having them avoid the crashes.
Edit: downvoters, please state your beliefs; there is no right or wrong answer to points 1-7 below.
Imagine you’re in a self-driving car going down a road when, suddenly, the large propane tanks hauled by the truck in front of you fall out and fly in your direction. A split-second decision needs to be made, and you can't think through the outcomes and tradeoffs for every possible response. Fortunately, the smart system driving your car can run through tons of scenarios at lightning fast speed. How, then, should it determine moral priority?
Consider the following possibilities:
1. Your car should stay in its lane and absorb the damage, thereby making it likely that you’ll die.
2. Your car should save your life by swerving into the left lane and hitting the car there, sending its passengers to their deaths—passengers known, according to their big data profiles, to have several small children.
3. Your car should save your life by swerving into the right lane and hitting the car there, sending the lone passenger to her death—a passenger known, according to her big data profile, to be a scientist coming close to finding a cure for cancer.
4. Your car should save the lives worth the most, measured according to amount of money paid into a new form of life assurance insurance. Assume that each person in a vehicle could purchase insurance against these types of rare but inevitable accidents, and then, smart cars would prioritize based on their ability and willingness to pay.
5. Your car should save your life and embrace a neutrality principle in deciding among the means for doing so, perhaps by flipping a simulated coin and swerving to the right if it comes up heads and to the left if it’s tails.
6. Your car shouldn’t prioritize your life and should embrace a neutrality principle by randomly choosing among the three options.
7. Your car should execute whatever option most closely matches your personal value system and the moral choices you would have made if you were capable of doing so. Assume that when you first purchased your car, you took a self-driving car morality test consisting of a battery of scenarios like this one and that the results “programmed” your vehicle.
There’s no value-free way to determine what the autonomous car should do. The choice presented by options 1–7 shouldn’t be seen as a computational problem that can be “solved” by big data, sophisticated algorithms, machine learning, or any form of artificial intelligence. These tools can help evaluate and execute options, but ultimately, someone—some human beings—must choose and have their values baked into the software.
Who should get decision-making power? Should it be politicians? The market? Insurance companies? Automotive executives? Technologists? Should consumers be allowed to customize the moral dashboard of their cars so that their vehicles execute moral decisions that are in line with their own preferences?
The fact that developers of autonomous systems have to consider the ethical and moral implications of their work is well understood, but these systems will still be much safer than human drivers.
Because it's the only aspect of autonomous cars that worries me and parent was talking about how much safer it will be. I know it's not rational from a statistical perspective and actually safer than human drivers, however on an intuitive human level it's hard to stomach that I may be algorithmically placed in harm's way.
I’m far more worried about the infinitely long tail of edge cases, the overhype of AI and the artificially inflated trust that comes from it, increased car usage when people would rather be ferried than take a ferry/public transit, and a population-wide decrease in the ability to manually control machinery... these all seem like near-certain critical issues that will affect masses of people, rather than hypothetical trolley problems that will seldom occur but are easy to obsess over.
For the opposite, see "I, Robot".
I initially thought of the main character's backstory (survivor's guilt over an algorithm rescuing him because he had a higher chance of survival than a little girl), but the main plotline is also about people being "algorithmically placed out of harm's way".
Until AIs are significantly more intelligent and capable than humans (and hence capable of solving this problem better than any of us could now), they'll do exactly what humans do in a crisis: Say OH SHIT and stomp on the brakes while trying to steer away from anything solid.
1. Do nothing at all because they're too distracted by their cell phone.
2. Do something worse than nothing due to panic and a lack of practice in such situations.
Personally I'm excited that these options will not exist for self-driving cars.
I'm also excited that we can talk about how the machines _should_ choose between the other options -- with human drivers we don't get that choice.
The only way to change this is by legally imposing a certain decision tree - and then it’s also considered “solved”.
What the hell kind of environment do these non-technical types think the cars are going to operate in? Are they driving through day-cares or something?
These endless moral arguments are coming from non-technical non-business types who are desperate to try to contribute to the revolution that is selfdriving cars.
The very few technical types who make these moral pronouncements are identifying themselves as having a skillset so far out of date that moralizing is all they have left to "contribute".
It boils down to some simple "moral" choices. Cars are on roads, people are not. Some people will accidentally end up on roads. The car will have the wisdom to try to predict people being idiots and do its best to avoid them.
But very much like right now: idiots who jump in front of cars are going to be Darwin'd if the car has no easy option to avoid them. If a child jumps out in front of a car, it too should, shall, and will be Darwin'd if the other option is injury to someone who wasn't a nitwit. I have exactly zero interest in being in a car that would say, "Oh, the occupant of my car is older than the nitwit child that just jumped out into traffic. I am now going to drive off a cliff to save the moron."
I don't care if a crate of orphans spills on the road. I hope my car will swerve if possible. But if any option involves risk to me, then a crate of speedbumps is how it shall be.
Why the scare quotes on "moral" choices? Because they won't be moral choices. The technical ability to weigh these things is not going to arrive any time soon. The car will stay out of situations where it is at fault, such as driving on sidewalks; after that it will do its best to mitigate for idiots, and then its primary purpose will be to keep the occupants safe. Otherwise, you will have the car doing things like identifying a blowing garbage bag as a 4-year-old child and driving into a tree to prevent the travesty of scattered garbage.
(I mean, I sympathise a bit with the Reddit user's contempt for surveys which ask whether self driving cars should avoid over "a criminal" in preference to other types of people as if this were the sort of thing humans were capable of doing, but at the same time if I believed autonomous vehicle programmes were being run by people with his "occupant first" mindset - and I'm sure they aren't - they should be shut down immediately and permanently)
If the contrived set of circumstances that would lead to such a scenario actually happen to you, and somehow the AI is advanced enough to analyze the various possibilities while not being advanced enough to avoid potential dangers in the first place, congratulations you're absurdly unlucky. Potentially unfortunate for you, but fortunate for the millions that won't die as a result of taking dangerous human drivers off the road.
If the probability of the flying propane tanks is low enough, then I don't need to worry about the car's moral priorities.
On a statistical level, sure. On an individual level you can improve your personal chance of survival greatly by not driving drunk and by putting down your phone; the shitposting can wait. Also, if you've made it past your teens, your odds improve greatly.
>I want to say being injured in a car accident is somewhere around 20-30% lifetime chance.
That sounds absurdly high. 25% is 1 in 4. Assuming the odds were equal or higher in the past, the average person would have one grandparent who's been injured in a car crash.
Whatever the rates for injury/death are they're probably not evenly distributed among the population.
The chance of dying in a car crash (or any kind of vehicle crash) is about 1 in 100, according to here.
I can't find something similar straight away for injuries, but car accidents are ubiquitous and 2M people are injured every year (vs. 30k fatalities) [1; cached]. So something above 20% seems plausible.
mortality due to transport accidents / total deaths ≈ 2,000 / 500,000 ≈ 1/250, or 0.4%
Let me just add sleep-deprived driving to the mix
4.6M / 330M = yearly risk 1.39%
x80 years = 111.5% chance of serious injury
I'm sure that includes things like whiplash though.
So the chance of no injury is (1 - 0.0139) ^ 80 = 32.6%, making the chance of serious injury 67.4% over 80 years.
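That correction is easy to check directly: a per-year risk can't simply be multiplied by years (it overshoots 100%); you compound the per-year probability of escaping injury instead. A sketch using the 4.6M-injuries-per-year and 330M-population figures from the earlier comment:

```python
# Naive vs. compounded lifetime injury risk, using the thread's figures:
# 4.6M injuries/year in a population of 330M, over an 80-year life.
annual_risk = 4.6e6 / 330e6          # ~1.39% per year
years = 80

naive = annual_risk * years          # ~111% -- not a valid probability
compounded = 1 - (1 - annual_risk) ** years  # ~67% lifetime injury risk

print(f"naive: {naive:.1%}, compounded: {compounded:.1%}")
```

This still assumes the risk is constant and independent across years, which (as noted below) it isn't: crash rates vary a lot by age.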
Funny that you mention grandparents: if you graph the crash rate by age, you get a deep U-shape with the very youngest and oldest having a much higher rate.
Whatever we count as an "injury" should probably be more severe than the little mishaps we have with our bodies from going through life.
The fact that this is apparently an unacceptable opinion here is mind boggling to me
If we're basing our stats on unimportant injuries (minor bruises and stiff necks that go away in a day), then we're measuring something we don't care about: injuries that do not meaningfully affect people. What we should care about is injuries with appreciable recovery time (serious bruising, having a sore neck for multiple days, etc.), because those negatively affect people enough to be worth caring about. If we're including injuries we don't really care about in our numbers, then something is wrong with our numbers.
I dare someone to actually tell me why you think we should not distinguish injuries that are so minor as to not appreciably affect people from those that do seriously affect people. Tracking minor and significant injuries in the same category misleads anyone who has to reason about or make decisions based on those statistics.
Cars are ridiculously dangerous. Something needs to be done about it. Good thing there's Waymo and other companies like them.
In my experience, humans on average are just quite bad at paying attention. The only reason autonomous vehicles didn't surpass our driving abilities years ago is that we tend to make our roads unpredictable (road work, human behaviour, etc.), and it turns out that humans can quickly comprehend those situations.
Research into autonomous vehicles is great and might eventually pay off, but let's not get distracted from real incremental improvements that are possible today.
I hope that a successful launch in Phoenix will expand out to other locales because this capability is a huge win for those groups that would otherwise not be able to make these trips.
It is also remarkable that it is possible to replace a regular vehicle operated by a human as a livery service, with an incredibly complex machine and still make a business case out of it. That says a lot about how far computers have come in the last couple of decades.
There are efforts to fit driverless cars to horrifically planned streets, but I don't see any cities approaching it from the other direction.
This is all said though because I freaking hate cars and the entire ecosystem surrounding them.
Can you blame them? There's barely money to maintain existing infrastructure, let alone rework it. It doesn't matter if it would pay for itself in 25 years; if that were enough, we would have gone 100% nuclear years ago. Investors don't bite on those numbers, and Republicans don't pass those budgets.
Maybe the driverless car players should work with the cities to improve infrastructure. They have all the capital, after all. However that logical reasoning isn't how the valley plays ball. Look at bird, look at uber, look at any 'pioneer' service entering these cities. They stomp their way into markets without trying to reach out to local governments and work with them, the city officials of course are annoyed and pass unproductive regulations, then the tech companies throw a tantrum and plead their pitiful case to their users. It's childish behavior that wastes everyone's time and money.
This is what should really be happening with the cities vying for the business of Amazon, Twitter, whomever.
I'm constantly flabbergasted by how short sighted both cities and companies are about the long term impact agreements will make on a given area. They think in terms of placating concerns via taxes but don't look at the true opportunity loss from not addressing future quality of life in an area. FFS tech companies should deeply be judged on this factor when being assessed for a new HQ, as they clearly are not currently going through a rigorous process which will play out in a decade plus.
Self driving cars are totally the wrong direction we should be going in, and it's going to cost the US more in the long run.
This goes against the research I’ve seen, which suggests a self-driving fleet can more-efficiently transport a dispersed population like America’s than rail. (For the relevant distances.)
Also, an anecdote: after Uber, I use the subway and regional rail system more. It solves the last-mile and "should I rent a car" problems.
I want to go from A to B.
Do you want: a) to ride alone, £100; b) to share a car, £60; or c) the most efficient option, switching to bus and train, £30?
Unfortunately, it would take a decade for the government bureaucracies involved to even get started with that effort, so we'll have to make do with just the vehicle side for quite a while.
I think a similar thing will happen with VR/AR when it eliminates the efficiency gap of remote work.
If you believe in yourself, you will succeed
And no amount of not believing in someone will cause them to fail
So the pessimists are just wrong. And sometimes it’s fun to be wrong in a fun way... no shade on pessimists. But they are objectively wrong.
I'll also be interested to see how much this is them wanting to run a full production service versus an advertisement for their technology. Will it be more like Google Search? Or more like Android, where they maintain a small market share as a demonstration of where they want people to go?
It's possible that they were under manual control at the time, but I doubt it. They have a distinctive way of driving while under computer control, which I guess I'd describe as "like a grandma" but basically involves acceleration, braking, and turns that are a bit smoother than any reasonable driver would make them and a top speed that is a bit more law-abiding.
It is California, so it's always sunny (well, smoky right now) and most of these were daylight, but I've encountered them in a lot of different situations and consider them safer than human drivers.
A pedestrian on the corner next to a stop sign causes them to pause in the middle of the intersection for an uncomfortable amount of time, to the point where you aren't sure if you should just go around them or what.
On my bicycle and motorcycle, I don't trust them at all. I've had them come within inches of me on a turn, without slowing down a bit. Maybe they saw me and executed a highly efficient path avoidance mechanism, no way to tell, all I know is they operate around me in a way that fires all my 'watch out, this motherfucker doesn't see you' alarms while I'm on two wheels.
OK, I'm being mean - it's a miracle they can go at all, let alone not hit people. Today at the aforementioned intersection, at the turn they always fail at, a bicyclist got run over by a pick-up truck. The Waymo SUV slowly pulled up to the intersection while people ran back and forth dragging the bicycle and his groceries and other debris away, and didn't run anyone over. It never stopped slowly creeping towards people, which was weird as hell, but hey. It handled a super odd situation like a champ.
>A pedestrian on the corner next to a stop sign causes them to pause
I am struggling to reconcile how the pedestrian does not have the right of way in both these cases.
> Pedestrian crossing: 50 state summary
> The majority of states, however, only require motorists to yield to, rather than stop for, pedestrians crossing at uncontrolled crosswalks. Nineteen states require a motorist to yield when a pedestrian is upon any portion of the roadway. Louisiana mandates motorist yielding when a pedestrian is upon the same half of the roadway. Nebraska requires yielding when a pedestrian is upon the same half of the roadway or within one lane of the motorist. Massachusetts mandates yielding when a pedestrian is upon the same half of the roadway or within 10 feet of the motorist; and 20 states mandate motorists yield when a pedestrian is upon the same half of the roadway or approaching closely enough from the opposite side of the roadway to constitute a danger. In addition, in at least five states and the District of Columbia, bicyclists have the same or similar rights as pedestrians.
The pause times do seem longer than a human would take (I noticed this with the right-turn with pedestrian case as well), but I'd rather that self-driving cars err on the side of safe rather than sorry.
This is why we can't have nice things.
2. Your example doesn't even seem to meet the slur's definition.
Personally, I downvoted it for the "civil rights" phrase. It is unnecessary, hyperbolic, and inflammatory; these discussions tend to attract such comments and they derail rational discussion.
See https://en.wikipedia.org/wiki/Jaywalking#Safety_consideratio... for the various unique instances.
In New York, right of way is pretty unilaterally given to road traffic. Outside of otherwise controlled interactions (signaled crosswalks), pedestrians are expected to give way to vehicles. And vehicles (to borrow maritime terminology) have a stand-on duty--they are expected to maintain speed and direction.
So it doesn't matter what posture pedestrians adopt, even when they are standing a full lane into the road (people commonly queue to cross the street in the parking lane, rather than on the sidewalk, leaving the sidewalk clear for people traveling along the street). Cars are expected to proceed -- and you can bet they will be reminded of that fact by those behind them if they slow unnecessarily. Furthermore, New York's robust jaywalking culture means that these people are all peering upstream, watching for a break in traffic.
In contrast, California has very much opposite expectations. Cars are expected to yield to pedestrians in pretty much all situations (and jaywalking is rare). It turns out if you exhibit really any of the above behaviors in California, drivers tend to interpret that as telegraphing an immediate intent and request to cross the street.
When I moved to California, I was initially perplexed and frustrated by how it seemed like cars would randomly stop anytime I happened to be standing by a crosswalk, even when I didn't have any intention to cross, or when I was happy to let them pass by first. Lots of awkward "no, no, please proceed" gestures ensued.
Eventually I realized the big cue I was giving them was that I was habitually watching traffic. I now take care to be visibly not looking upstream when standing at a crosswalk. That seems to have solved the problem.
So I submit to you that the difference you're picking up on between pedestrians who do and do not have an intent to cross is whether they are paying attention to traffic.
I don't have to analyze anything, I don't have to negotiate with a driver, don't have to guess speed, I just go when it's my time to go.
A pedestrian jaywalking certainly does not have "the right of way." That doesn't mean the car doesn't still have an obligation to avoid hitting them. The rules of the road are redundant for a reason: one party failing to do what they're supposed to should not cause a crash.
Keep in mind that both El Camino Real and the freeway ramps are maintained by CalTrans, not the city. But the one time I submitted an issue about a freeway off-ramp, they forwarded the report to CalTrans for me, so it's a good place to start.
It just now occurs to me that autonomous cars could have status lights. So that an observer can better predict what's gonna happen next.
Kinda like the purpose brake lights and blinkers serve. Flashing yellow lights could indicate the car is confused, thinks there's a safety risk, or whatever.
An expanded visual vocabulary.
On the other hand, an LED matrix with pictograms would be great.
This is a really interesting point to me, and I think it's one of the major weak points for self-driving cars: they have to pass a kind of Turing test.
You have a good example of something we do all the time when driving: try to read the mind of the driver. Don Norman wrote a book titled, "Turn Signals are the Facial Expressions of Automobiles", but it goes well beyond that. We use the "body language" of driving to infer driver intent.
I think the best short-term case is that people make excuses for the dumb robot as you do here. But a very plausible outcome is that we get an "uncanny valley" effect for robot cars where people resent and are creeped out by something pretending to be human but falling short.
Not sure this is 100% related, but the driver-assistance technologies in my new 2019 vehicle (lane keep assist, blind spot sensors, adaptive cruise control) all degrade in inclement weather. My lane keep assist is completely unreliable in the rain, and the collision detection sometimes misfires (though thankfully it doesn't brake for me) or mistakenly thinks an oncoming car around a bend is in front of me, etc. There are thousands of edge cases I can think of with just my rudimentary understanding of the technology (namely lidar), which makes me think there is just no possible way these systems have reached the point where they are safe for general use.
As a rough rule of thumb, the companies testing level 4 autonomous vehicles on the road today are paying around $100k-200k per vehicle in hardware costs alone.
In either case, it isn't hard to imagine that the fundamental algorithms are the same -- recognize lane markings, adjust the steering wheel, that kind of thing. What do you do when the lane markings have been worn off the road? Or on freshly paved roads that haven't been painted yet? I could go on ad nauseam, but the point remains: too many corner cases, lidar or not.
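The worn-markings corner case is easy to see in even the most naive version of that loop. A toy sketch (not any vendor's actual algorithm; the offsets and gain are made up):

```python
def steering_correction(left_marking, right_marking, gain=0.5):
    """Toy lane-centering: steer toward the midpoint of the detected markings.

    left_marking / right_marking are lateral offsets in meters from the car's
    centerline, or None when perception finds no marking -- the corner case
    discussed above (worn paint, fresh unpainted asphalt).
    """
    if left_marking is None or right_marking is None:
        # The simple algorithm has no answer here; something else must
        # take over (the human, or a higher-level planner).
        return None
    lane_center = (left_marking + right_marking) / 2
    return gain * lane_center  # positive = steer right, negative = left

print(steering_correction(-1.5, 1.5))   # centered, no correction: 0.0
print(steering_correction(-2.0, 1.0))   # drifted right, steer left: -0.25
print(steering_correction(None, 1.5))   # markings worn away: None
```

The hard engineering isn't the happy path; it's deciding what to do in that `None` branch, thousands of times, for thousands of distinct corner cases.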
There is a reason that these will be severely geographically restricted for a significant period of time.
Are you speculating, or is there a source for this point?
I don't doubt it, just wondering, as I think evidence for this would be fairly incriminating.
Interesting. I'm a little surprised this did not get more coverage.
No more than a human needs to handle.
The cars will operate 24/7, in a small, well-defined section of metro California. They can handle light rain, but won't work in heavy rain, snow, flooded roads, or one-way mountain roadways.
Though that document also says that they won't be charging passengers, so I guess it's a little out of date.
I suspect your perception of how transparent they're being might differ depending on where you're located; Waymo's disclosures so far have been mostly in the form of communication directly with the communities they're testing in; not as blog posts or press releases to national media. That document, for example, contains copies of emails that Waymo sent directly to city officials in the places where they planned to start testing.
Yes, I am definitely basing my understanding of their transparency with the public on what they have said to the public.
It’s certainly progress, but I would have appreciated a less click-bait headline.
Of course they're starting in one of the easier places to operate driverless cars. Why would they introduce more risk than necessary to the launch of an extremely complex new technology?
> It will operate under a new brand and compete directly with Uber and Lyft.
It wasn't clear until now what the plan was, if they would start their own service or run their cars inside the Lyft/Uber network. The fact that they will start a new service with a new name is pretty big news, and therefore not click bait.
When this launches, will the cars have drivers in them?
That’s how you can tell if it’s a driverless car.
Arizona has the highest rate of pedestrian deaths in the US. Pedestrians have apparently been trained to keep off the streets.
It's also not immediately clear if this will be in the city of Phoenix itself. The earlier testing was in the suburb of Chandler.
Huh? Taxis are for sure used way more in NYC than out in the suburbs. They benefit significantly from density. Out in the suburbs you can't even catch a taxi unless you call one to you.
The popular way of getting around in the suburbs is driving your own car. In cities mass transit and taxis are much more common, especially because parking is so expensive that it makes more financial sense to hail a taxi for the occasional trip.
In theory, to the customer, it doesn’t matter if a human or machine is driving, so you’re no longer dealing with a two-sided network, making adoption substantially easier, especially for a company like Google who can deploy massive capital.
The thing I never understood is how Uber’s investors rationalized this and thought it wouldn’t happen. Did they think self-driving cars would not be a reality? Did they think Uber could get there first? Did they think there is still a profitable enough gap between the current situation and the driverless future to hedge in case Uber doesn’t get there? Is the Uber brand and tech worth so much that they’ll get something back as an acquisition for whoever does get there first?
Seems like exactly what they thought. Remember Uber acquiring Otto and Levandowski? AFAIK Travis was pushing hard for autonomous driving but accidents like the fatality in AZ and missteps like Levandowski's fraught relationship with Waymo led to deemphasizing autonomous driving for the time being. Dara's main job seems to be to take the company public, which requires scaling and fixing the margins on the products they already have in place.
I don't work at Uber and perceptions are my own, so curious if any Uber employees agree.
Self-driving cars are a long game until companies can bring down the costs. It might be years before that happens. Uber has the advantage of being able to ramp up self-driving cars as part of their existing fleet until then.
Edit: Math mistake
Over 6 years, that comes to $114/day. 1 ride per hour at $5 per ride would hit $120/day.
Assuming a 2 mile ride, that would be 105k miles which is well within the car’s usable life. At 3 miles, it would be 160k miles which is still within a car’s lifespan. It looks like Uber is $1.35/mile with a $2.10 base fare and $1.85 fee. A 3 mile ride should be able to get $5.
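The amortization arithmetic above checks out; here it is spelled out, using only the figures quoted in this thread (a $250k vehicle, a 6-year window, and the quoted Uber pricing):

```python
# Sanity-check of the break-even math above, using the quoted figures.
vehicle_cost = 250_000          # per-vehicle cost cited in the thread
days = 6 * 365                  # 6-year amortization window
print(round(vehicle_cost / days))   # ~114 dollars/day

rides_per_day = 24              # 1 ride per hour, around the clock
print(rides_per_day * 5)        # 120 dollars/day at $5 per ride

miles_2mi = rides_per_day * days * 2
print(miles_2mi)                # 105,120 miles over 6 years at 2-mile rides

# Quoted pricing: $2.10 base + $1.35/mile + $1.85 fee
fare_3mi = 2.10 + 1.35 * 3 + 1.85
print(round(fare_3mi, 2))       # comfortably above the $5 assumed above
```

So at the quoted rates a 3-mile ride actually grosses well over $5, which makes the $5 assumption conservative.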
Plus, it’s really about the long run. Operating margin might be negative for a bit, but the cost of the technology and manufacturing will come down significantly. Still, even today, I think $250k just isn’t that bad. I think most drivers will do a lot more than $5/hour in gross revenue.
It would be really expensive to make a fleet, but the economics are so compelling, even at high prices. I mean, Americans are often spending $35,000 on a car. If a self-driving vehicle can service the needs of 7 people, it can be cheaper than car ownership.
While the price of driverless tech might be high now, the prices of car ownership and human labor are both very high as well and only one of those three prices is likely to decrease over time.
Driverless cars are "always on", so it's not an apt comparison; you're always going to get more for less. Besides, ridesharing is already subsidized right now, so the bottom line isn't an issue in the market.
Likewise, ride-sharing with self-driving cars is even easier: if you own a self-driving car, it can work on its own while you are not using it, which is most of the time. There's no reason a similar asset-sharing model won't spring up once economies of scale kick in.
You could also argue that it's cheaper to pay workers with their own textile equipment, rather than spend the CapEx on a weaving loom, yet here we are.
It would take one trivial piece of legislation to turn this technology into a souped up cruise control rather than the world changing technology that its backers insist that it is. How could you even fight that sort of regulation without your argument boiling down to 'we promise we don't need human oversight.' One lazy 'think of the children!' retort later, and it's banned faster than mango juul pods.
I don't understand the investor confidence here. To me this seems like basic research, critical for future technological developments, but a cash sink with no guarantee for profitability. Is this just a rat race between the giants throwing cash at this?
Large companies can adopt in other cities, or if need be, other countries eager to change their quality of lives.
It's like the word "trivial" has no meaning anymore. All I can tell from people using it these days is that they're very confident in what they're about to say.
You probably need over 100x the vehicles at 5pm as you do at 4am. You have to find a balance of how much CapEx is worthwhile -- probably more than you need to cover the 4am shift, but far less than needed for the 5pm shift.
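One way to see the balance problem with toy numbers (all hypothetical, purely illustrative of the peak-vs-trough tradeoff):

```python
# Hypothetical simultaneous-ride demand at a few hours of the day.
demand = {4: 10, 8: 400, 12: 300, 17: 1000, 22: 200}

fleet = 500  # a candidate fleet size between the 4am floor and 5pm peak
unserved = {h: max(0, d - fleet) for h, d in demand.items()}
idle = {h: max(0, fleet - d) for h, d in demand.items()}

print(unserved[17])  # 500 rides unserved at the 5pm peak
print(idle[4])       # 490 cars sitting idle at 4am
```

Any fleet size large enough for the peak leaves most of the CapEx idle overnight, and any fleet sized for the trough leaves the peak unserved; the optimum sits somewhere in between, set by how much an idle car costs versus how much an unserved ride is worth.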
Right now, rideshare companies keep ~40% of the ride revenue (not including vehicle payments), which means 60% goes to the driver. If you put operating costs (fuel, gas, insurance, tolls, oil changes, maintenance) at 30% of total revenue, the remaining 30% is what the driver takes home. Even if you assume the driver works 80 hrs a week, that's not even 50% of the hours an autonomous car can work (24x7 = 168 hrs a week).
With a fully autonomous car, that remaining 30%, combined with lower maintenance and insurance costs, means probably close to 40% of current ride revenue will be saved. If you operate the car 160 hrs a week (8 hrs reserved for fuel/maintenance), then probably ~60% of ride revenue (at current rideshare rates) will be pure profit.
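Those splits can be laid out as a toy model. All the percentages and hours below are the assumptions from this comment, not real Uber economics:

```python
# Toy margin model using the assumed splits from the comment above.
revenue = 100                    # normalize weekly ride revenue to 100 units
platform_cut = 40                # rideshare company keeps ~40%
driver_gross = revenue - platform_cut      # 60 goes to the driver
operating_costs = 30             # fuel, insurance, tolls, maintenance (~30%)
driver_take_home = driver_gross - operating_costs
print(driver_take_home)          # 30 units: the share autonomy reclaims

human_hours = 80                 # a (very generous) weekly driver schedule
autonomous_hours = 168 - 8       # 24x7 minus ~8 hrs for fuel/maintenance
print(autonomous_hours / human_hours)  # 2.0: autonomy doubles utilization
```

Reclaiming the driver's 30% and roughly doubling vehicle utilization is where the "~60% of ride revenue as profit" figure comes from.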
As the cost to produce these cars (basically a separate technology package added to regular cars) goes down, they can keep the service super affordable while recovering the upfront investment in the $250K car within a few years. Instead of having 10 Chrysler vans sitting on dealer lots or in storage, have one $250K car in service the day after it's manufactured. After that, maintenance costs may increase slightly, but the rest is pure profit, which creates a positive feedback loop:
reduce vehicle price --> reduce final fare paid by passengers --> attract more users --> more vehicles needed --> larger-scale production and further price reduction --> repeat.
At some point it'll be cheaper than public transit, and then owning and driving a car becomes a need only in regions/places/conditions where autonomous cars won't work. Until then, non-autonomous cars ensure a good margin for autonomous cars to thrive.
FWIW, I believe Uber has zero advantage with its existing fleet. All they have is a large number of indentured drivers (due to subprime loans given to them for their cars). In fact, it's a disadvantage for Uber: as rideshare fares go down, it makes less sense to operate those cars, and drivers will just stop making payments and return or trash the vehicles.
I don't think anyone is interested in "mak[ing] real money that way right now", they want to get a foothold in, or capture a swath of, a market which definitely will make them money in the future. A tremendous amount if they play it right, even if it means operating at a loss for the next few years.
When they’re driving around with no safety drivers in Manhattan in the winter, we can talk about solved problems.
Unless the humans are losing money by doing so (which may be the case, but probably doesn't make a sustainable business model), the car service is ultimately fully paying for the car for the period it uses it, so it doesn't lose anything by doing so up front. It increases the up-front capital requirements, but Google has more cash on hand than the market cap of some Fortune 500 firms, so that's not really an enormous issue for Google.
> It’s much easier to let car owners take the risk of investing in a car, maintaining it, repairing it (for the moment they don’t repair themselves), cleaning up the Saturday night mishaps.
Well, except humans lie about the last bit to get extra reimbursement, which pisses off customers. That kind of thing, and other driver fraud, is a compelling reason to get independent driver-owners out of the loop.
Your non-automated workforce + margin is my opportunity - google
Seriously though, I think mobility network effects can be overcome because they're inherently local, and they're "attackable" because there's decreasing returns to scale. Average distance to nearest ride tails off pretty sharply. Obviously not as weak a moat as scooters though, because it's still very capital intensive.
The first-mover also incurs some regulatory cost and rider suspicion the followers might not, too. Still, it's an enviable position to be in.
I think that's a scary premise to be hanging the company on. First, it's too big a market for one company to control; this isn't Facebook. Second, self-driving can make this a one-to-many market without needing Uber. Third, rides have prices; this isn't Google. A cashed-up competitor can price-war their way into market share. Uber likely won't be able to leverage their position the way digital monopolies have. Prices mean price wars.
If I were Uber, I would be looking for an immediate IPO.
If driving becomes a commodity, the bigger long term problem is the mobile phone being the gateway to calling a ride. The users may just use Google/Apple maps or the voice assistant.
This will mark the first ever commercial self-driving cab service.
Then separate companies build a business around each use case to monetize the driver.
1) Consumer rideshare
2) Courier / delivery
3) Corporate / fleet / trucking
4) License driver tech to OEMs
5) Lease self-driving vehicles outright
While I was there, the local TV channels ran segments a couple times a day. Very well done.
I assume it was paid editorial content.