Waymo still has to use "safety drivers". So far, nobody seems to have full self-driving without "safety drivers", which makes it useless. That's the big milestone to look for. When Waymo can get rid of the safety drivers, which they tried briefly last year, they'll be getting close to something usable.
Some products take a long time to bring to market. Xerography - first copy made in 1939, commercial success in 1959. We're now 15 years after the DARPA Grand Challenge, so we're coming close to that 20-year wait.
Television - first broadcast, 1928; commercial success, around 1948. So 20 years again.
Roller bearings - Timken founded 1898, Timken bearings in 80% of US cars by the 1920s. 20 years again. (Although it took a really long time for them to convert railroads. First locomotive with roller bearings, 1923. In 1949, they were struggling to get railroads to put roller bearings on freight cars. In 1991, roller bearings became mandatory for US inter-line interchange of freight cars.) Air brakes and automatic couplers only took 7 years, but that's because the U.S. Congress made railroads convert and standardize between 1893 and 1900.
What took a really long time, in post-1900 technology? (Before 1900, manufacturing infrastructure wasn't really ready for fast deployment.)
A common saying around here is that we have two seasons: winter and (road) construction.
Construction zones have pretty much every obstacle to automated driving you can think of:
* painted lanes that don't correlate to the temporary lanes marked by cones
* lanes that don't correspond to pre-programmed maps / gps
* irregular and unpredictable vehicle and pedestrian entrances and exits (construction workers and trucks)
* areas where traffic is reduced to a single lane for both directions and must take turns, coordinated by humans with signs at each end of the lane
* speed limits marked by temporary signs
* rough, temporary transitions between pavement and gravel
Unless we can somehow get every state to compel every road construction company and every autonomous vehicle maker to use a single communication protocol, and to implement it at every construction site (so autonomous cars are made aware of these dangers), it's not going to happen.
Oh, and said protocol has to be hack-proof so trouble-makers can't start convincing cars that they're in the middle of a construction zone and force them out of their lanes on normal roads.
It's conceivable that the coordinated effort could happen, but I'm not going to hold my breath (due to the sheer increase in cost to the government) nor will I trust that said protocol will have fail-proof security.
Why would it be easier for trouble-makers to fool autonomous cars? As a human driver, I'd be fooled by pretty much any road marking or guy in an orange vest.
It’s amazing how forgiving we can be of human error (accidents every year) but absolutely not of machines/autonomous vehicles, even when, statistically speaking, machines may make better decisions much faster (or at least no worse than human judgment)... I guess the feeling/perception of being in control is more important to us...
Another interesting observation in every autonomous vehicle discussion is how we focus only on edge cases... when in reality, every tool we use today (including the cars we've been driving) is built for the general use case and operates in a mostly controlled environment.
Rather, thinking of the autonomous car as an additional pair of eyes and hands when we need it most might serve us well in the short run, before the technology matures over the next decade or two.
I’ll be really happy and relaxed if my car can mostly (70-80%) drive itself on my daily commute or my next trip to LA; expecting it to be my chauffeur is a bit too much, personally.
Maybe it works for long haul trucking though.
amazing how forgiving we can be of human error ... but absolutely not of machines
Also, when a human driver's negligence results in injury or severe damage, criminal charges result. That's a deterrent. With autonomous driving, you can't prosecute an algorithm.
would "use at your own risk" vindicate the company behind autonomous vehicle? or owner is responsible for his vehicle's actions? i guess never in the history, we had so much advance automations in direct hands of consumer...
As for the failure, I have reasons to disagree... if autonomous cars are working under "unsupervised learnings", my assumptions is, it most likely will makes different decisions for same scenarios based on data on hand.... so thousand's of failure events... though it may look similar may or may not end in same results... similar to how we would react when faced to some unknown situation on road... your scenario might more likely to play out for bad batch of hardware devices/sensors/lidar/camera etc in autonomous system...
If it's sold as fully autonomous, i.e. significantly beyond Tesla's system today, I don't see how the manufacturer could not have the liability. How comfortable would you be to use a car that could expose you to severe criminal liability because some company made a mistake with their software?
The company responsible would also have a clear incentive to alter/destroy any damning evidence gathered in telemetry.
Not saying it doesn't happen. But now you've gone from a product liability case which rarely has individual criminal consequences to actions that clearly do.
If/when we get to this point, it will be "interesting" though. Outside of maybe the medical area, there aren't many examples of consumer-facing products that, when used as directed, kill people because sometimes "stuff happens." And people generally understand that's just the way it is.
It's not out of the realm of possibility to imagine government-approved autonomous driving systems that insulate everyone involved from liability so long as they're used and maintained as directed. See e.g. National Vaccine Injury Compensation Program. I'm not sure it's likely but it might become a possibility if manufacturers find they're too exposed.
There's a caveat here that this 70-80% must be contiguous and the car must be superhuman-level reliable in that segment. Otherwise, the "additional pair of eyes and hands" significantly increases the danger. If your car suddenly decides that it can't handle something and asks you to take over at the last second, you won't be able to handle it either.
Which is actually a big win, as long highway drives are boring and probably account for a decent chunk of the more serious accidents.
It doesn't give you the robo-taxi use cases that are what a lot of urbanites care about the most. But it would be a nice safety and comfort add-on for how a lot of people spend many hours of their weeks.
Like any risk, you also need to consider the impact of getting it wrong. If an audio assistant gives you the wrong answer to the population of your hometown, no big deal. But if your car thinks everything is okay and drives you into a stationary fire truck on the shoulder of a freeway when you are travelling at 70mph, the downside of that edge case is infinitely worse.
Sure, humans can make these mistakes, too. But the fact is that your notional world where computers are able to make smarter decisions than humans about how to drive doesn't actually exist. No one has figured out how to make it work. And they won't anytime soon. They've solved all the easy parts. But it turns out there's a lot more involved in driving than all the billions of dollars poured into the problem so far can figure out.
My point is, a computer with:
- more data (historical data on how to act in certain situations, plus live data for the event, i.e. sensor data, lidar/radar data, images) vs. a human driver, who has neither access to these nor the ability to process them
- faster and parallel processing vs. a human driver
- a single focus/goal (driving from x to y safely and making appropriate decisions to achieve it) vs. a human driver (with "physical limitations", "emotions", "hormones" and the other things that make up "life") who is more likely to be distracted...
A computer with all of the above advantages may be able to make better-informed decisions much faster than a human driver can (and when it doesn't, it's hard to know/prove whether a human driver would consistently make a better decision every time in the same situation).
Having said that, I agree the tech is in its infancy, that it will take a decade or two to mature, and that even after that, just-in-time human intervention will sometimes be needed; but for the most controlled/learned environments (which account for 70-80% of day-to-day driving) these systems would be immensely helpful.
Note that self driving vehicles aren't different from humans in that respect, except they see much farther.
So, it takes a lot more work on the programming side to compensate.
Imagine someone hacking the 'construction zone protocol' and spoofing thousands of cars into thinking they're in a construction zone at once. You'd be hard pressed to fool thousands of geographically separated human drivers at the same time.
That only works if a police car doesn't come by and catch the perpetrator in the act.
With a wireless communication to automated drivers, someone could plausibly feed bad information from a hidden or otherwise remote location.
Beyond that, just as automation allows human-intensive processes to scale by removing the humans, fooling automated drivers can scale much more readily than fooling human drivers.
As soon as it becomes a robot, a lot of the social pressure to be a good person falls away. Less so if there are people inside, but I can see empty autonomous cars being given a pretty hard time just for kicks.
Once you have autonomous cars that drive safely but can't manage complex situations like the ones you describe, you delegate those to remote pilots who are allowed to operate the car at slow speeds. You need 5G network coverage with mission-critical features (mcMTC) to achieve that: BLER of 10^-6 and E2E latency < 5-10 ms. Construction crews might be required to erect a 5G mini cell tower before they can start working, to make sure that traffic keeps flowing smoothly.
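As a rough sanity check on that latency bound, here is the "blind" distance a remotely piloted car covers during the network delay; a sketch, with the speeds being illustrative assumptions:

```python
# How far does a remotely piloted car travel "blind" during network delay?
# distance = speed * latency; the speeds below are illustrative assumptions.

def blind_distance_m(speed_kmh: float, latency_ms: float) -> float:
    """Meters traveled during the given latency at the given speed."""
    return (speed_kmh / 3.6) * (latency_ms / 1000)

print(blind_distance_m(30, 10))   # ~0.08 m at a slow remote-pilot speed
print(blind_distance_m(100, 10))  # ~0.28 m at highway speed
```

At slow remote-pilot speeds the car moves under 10 cm per 10 ms of delay, which is why the slow-speed restriction matters as much as the latency figure itself.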
A taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of reduction in workforce provides huge savings.
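Back-of-the-envelope, that operator-to-fleet ratio falls out of assumptions like these; every rate below is made up for illustration, not from any real fleet:

```python
import math

# Back-of-the-envelope staffing for a remote-operator center.
# Every number below is an assumed figure for illustration only.

FLEET = 10_000                # vehicles
ASSISTS_PER_VEHICLE_HR = 0.1  # assumed: one remote assist per vehicle per 10 h
HANDLE_TIME_MIN = 3.0         # assumed minutes of operator time per assist
UTILIZATION = 0.5             # keep operators half-idle for fast response

def operators_needed(fleet, assists_per_hr, handle_min, utilization):
    # operator-hours of work generated per hour, divided by usable capacity
    workload = fleet * assists_per_hr * (handle_min / 60.0)
    return math.ceil(workload / utilization)

print(operators_needed(FLEET, ASSISTS_PER_VEHICLE_HR, HANDLE_TIME_MIN, UTILIZATION))
# 100 -- consistent with the 100-200 figure above
```

The ratio is very sensitive to the assist rate: halve the interventions and you halve the headcount, which is the "slow march" toward fewer humans.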
I do wonder if that's a factor behind Musk's push into low-orbit satellite Internet.
> Taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of reduction in workforce provides huge savings.
Even if all mileage was human-driven there would be very large benefits if you could really consolidate taxi drivers in call-centres for remote driving. No need to transport or preposition drivers and much less trouble estimating demand.
fleet of 10,000 vehicles might need only 100-200 remote operators
I was just pointing out that, if you can't guarantee you won't need to handoff to a physically present driver, then there are a lot of things you can't do with the car even if needed interventions are just an occasional thing.
Getting to absolute 100% will require either AGI or an incredible infrastructure investment. Now personally I think FSD is worth on the order of $1 trillion per year to the economy, so it’s the next great Moon Shot, and totally worth every bit of infrastructure investment we can throw at it.
But it makes sense to see how much further we can get with in-car algorithmic driving before the infra investments start coming in earnest to fill in the gaps.
Another possibility is there could be ways for a passenger to assist the algorithm without actually using a steering wheel and pedals as input.
I believe the level before truly perfect FSD allows the car to get stuck as long as it does so safely. Approaching and stopping at a single lane construction zone, for instance.
The current Tesla AP does remarkably well on highways with missing lane markings. A stretch I drive every day is ground down in prep for new pavement and just has the occasional white square marking, but it’s enough for AP to lock in on. It also seems to do fine with cones.
It’s worth noting that construction zones aren’t even particularly safe for human drivers (accident rate skyrockets). So technology to make construction zones more passable overall is important, even if it just enables self driving as a side effect.
The remote control can be to tell the car "drive this path" instead of direct control of the vehicle over a high latency link.
Essentially a POC that the car can see, but is so brief that the human cannot see anything.
> When Waymo can get rid of the safety drivers [...] they're getting close to something useable.
I suspect that they can cheat this one a little. As long as they're in an area with good connectivity and the cars are smart enough to pull over if they don't get human guidance, I expect they'll move the backup drivers to a central location. Call it SAE level 3.5. It wouldn't be good enough to sell cars, but it would be workable for a ride service, and would allow them to undercut Uber, etc, quite handily.
And that's leaving aside reliability. A 99.9% solid connection is not nearly good enough.
But consider a point where the cars are good enough that the remaining risks are ones where hazard detection can be tuned so that it doesn't miss any potential hazard, even if it is occasionally overly cautious, and where the car can reliably and safely slow down or stop short of potential hazards it doesn't know how to handle.
In that case you might get to a point where it's ok (safety wise, if not in terms of customer satisfaction) if the car stops for 30 seconds until a human safety driver reviews the data and confirms that what the car "sees" is not a dangerous situation.
In that case you might have e.g. 10 cars per safety driver, or more, and most of the time the car might not even stop - if a driver is available to respond immediately it may be sufficient for it to slow down until it gets a response. And you can simply slowly reduce the number of safety drivers as the cars get better. For a fleet service you might well never stop having some people monitoring to respond to unexpected conditions.
Of course, for this to be viable, it must be possible to make the car safe without human intervention; but that safety may be achieved by opting to stop or slow the car down in situations where continuing might be perfectly safe but where the car can't yet tell by itself (with the caveat that this may, e.g., restrict where it can be allowed to drive).
This of course presupposes specific types of failure scenarios where the car can safely find a way to come to a stop but can't safely determine if it can continue forwards. It's not a given that's achievable with low enough effort (relative to solving the issues that might cause it to fail to spot a hazard) to be worth it.
It would be to annotate something the algorithm flagged as impassable in some way such that the car can continue driving itself.
If the car entirely fails to identify a driveable lane, I don’t think you can remote in and actually active steer a human-occupied 2 ton vehicle over 4G.
Of course, it's still possible that the vehicle will somehow be out of communication when it encounters something it doesn't understand. A cell jammer, say. In which case, it'll do what it will do if it, say, detects engine trouble: it'll pull over and wait.
[Also seen in the collection "The New Space Opera 2".]
I dunno about that. Maybe it's not immediately valuable in terms of being cheaper than picking up an Uber, but there is still value in getting your sorta-working prototype on the road for data gathering. Your Uber driver isn't going to have a vehicle with dozens of sensors to help your ML bootstrap, that's for sure.
The value is in the data, not the immediate convenience.
(however what's most interesting about that slide is that we no longer can adapt to the rate of change/innovation in technology)
So don't think things need 20 years anymore - how quickly before Twitter, Instagram, Facebook and Google became mainstream?
Actually, the period of greatest technical change in human history was probably 1860-1910. Steel, electricity, autos, airplanes, subways, radio, elevators, skyscrapers, machine guns...
Previous analog-to-digital transitions have been easier because we could afford to lose information (e.g. audio, from records to CDs). It was just a matter of getting the digital sampling fine enough. Information was still lost, but not enough to matter.
But with driving, it matters. We can't digitize the world to a point where a computer can drive better than an alert trained human.
With self-driving cars we have no such prototype and I would not expect one in the next 20 years. Unless we are talking about driving on specially equipped roads/lanes or in some special situations, where we might see them much sooner.
Fusion reactors. Still not there.
Why would riding on public roads matter when it's about what happens inside a private vehicle during a high-value R&D experiment? They got permission from Arizona to run the tests, and it will likely benefit Arizona's economy in return for being the first to supply the most training data.
Or, the author is suggesting that public resources should be used for public, not private benefit?
Of course, the potential for reduced accidents should help. Unfortunately, part of why AZ was probably chosen is that it's mostly sunny/clear weather during the year.
But once the technology is developed and available, the cost will probably be pretty low: a couple of sensors and processing. I feel like maintenance costs will likely be smaller, since a self-driving car, in theory at least, can detect any issue much quicker than a human.
This is all speculation of course, but human drivers are very expensive. Quick googling says about 20k to 45k per year for a truck driver. Even if the initial investment in self-driving technology costs 50k per unit more, it's still incredibly advantageous to do so.
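The payback arithmetic is short; a quick sketch using the wage range above, with the $50k hardware premium being the assumption from the comment:

```python
# Breakeven sketch: extra self-driving hardware cost vs. saved driver wages.
# The wage range is from the comment; the $50k premium is an assumption.

DRIVER_WAGES = (20_000, 45_000)  # USD/year, low and high estimates
EXTRA_HW_COST = 50_000           # assumed added cost per vehicle

def breakeven_years(extra_cost, annual_saving):
    return extra_cost / annual_saving

for wage in DRIVER_WAGES:
    print(f"wage {wage}: payback in {breakeven_years(EXTRA_HW_COST, wage):.1f} years")
# 2.5 years at the low wage, ~1.1 years at the high wage
```

Even at the lowest wage, the premium pays for itself in a few years, well within a truck's service life, and that's before counting that a truck without a driver can run more hours per day.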
It's tiring reading commentators say things like this. Saying a professional journalist lacks "journalistic integrity" is bold. Why don't you step up and make this argument directly to the author, rather than being snide and posting it on a forum he will never read?
Here's his Twitter; have at it:
2. I have confronted journalists whose practices I disagree with when I meet them in real life, and in general their excuses are not impressive. Most recently I challenged Ivan Semeniuk, science journalist for the Globe and Mail, when he visited Perimeter Institute for "The Future of Science Communication", a panel discussion we were both on. (Semeniuk was endorsing a different common journalistic practice I disagree with, not the same as exhibited by Amir Efrati, but I can't share because it was a private conversation.)
3. Challenging Efrati would be like confronting every panhandler who tells a false sob story. Newspapers are full of this sort of writing, and you could spend your life objecting to it.
A research experiment that exposes the public to additional risk should be performed with the maximum amount of transparency possible. It is frightening that this even needs to be spelled out.
It's a low bar for automated cars to be better than human drivers. Sure, they'll have their 'blind spots'. That's no condemnation of the whole industry, because what we have now is not very good at all (fallible humans, all different). And when automated drivers have an issue we discover, they can all be fixed. Try that with humans.
Even tired, impaired, or distracted drivers still behave in semi-predictable fashions - they tend to overreact.
No one is saying entirely driverless vehicles are ready yet. Even some of the earlier hawks who claimed next year have backtracked.
Nothing wrong with that, all software deadlines are usually 1.5-2x longer than initial optimistic estimates.
If waymo wants to have the privilege of secrecy they can run experiments somewhere not open to the public. That should be the standard we apply to these companies.
Although I wonder if seasonality is important here - obviously Phoenix isn't Minnesota but could driving conditions have been worse in Q1 compared to this summer, from the perspective of a self-driving car?
An example from my most recent drive: I drove through a common where cows graze. I doubt a driverless car is programmed to slow down near cows.
The only solution I can see is to whitelist roads and start with the simplest (motorways/highways) then gradually expand to more and more roads. I guess that's kind of what they're doing - suburban America is very easy (although you still have pedestrians and cyclists to deal with unlike motorways), and maybe whitelisting is the cause of the routing complaint.
The whole "if it's not amaaaaazing it's terrible" thing is idiotic.
The bitch of this business is the long tail of possible scenarios - before people have confidence you need to solve the long tail, which is hard because there's not as much data and it's much less predictable. It sounds from the article like they're making headway, though!
Waymo employees, who are encouraged to be especially tough, give reviews that are 47% negative. That's likely closer to the metric for perfect.
We don't get perfection from human drivers either, of course. Though part of the promise of Waymo is much better performance than humans, which it apparently isn't close to yet. And this data is for relatively common cases: for the long tail, one-in-a-million case, the conventional wisdom is that humans would do better.
You'd be surprised. I've had a taxi driver turn up train tracks before.
So maybe they could start working in 2 years in sprawled suburbs in hot areas where you don't have many cyclists or pedestrians? Or is that Phoenix already and it's still too hard?
That quote at the end:
> I guess Lyft has me spoiled. I like getting dropped off in front of the place im going too [sic] not just in the parking lot....
Then again, my idea of "reasonable walking distance" seems longer than most people's. Having spoken to many drivers who have parked in the bike lane, I'm amazed by how negatively some have reacted to me recommending that they park as little as 50 feet away. In some cases the non-bike-lane spot is closer but the convenience of pulling to the side of the road rather than doing a more complicated maneuver seems irresistible.
If Waymo follows the law, good for them. Makes me more likely to be a customer of theirs in the future.
What I suspect people are complaining about is that Waymo doesn't do curbside dropoffs at locations with a parking lot -- not common biking routes. I bet Waymo doesn't have the data to know whether a curb is painted yellow, blue, or red, and just avoids them, while a Lyft driver would probably put on hazards and drop people off at yellow curbs and bus stops.
Uber/Lyft drivers break the law dozens of times a day. In fact the entire experience is predicated on their ability to pick you up/drop you off in places they shouldn't e.g. out the front of your house.
I guess self driving cars will be closer to Uber Pool in terms of experience.
Another comment mentioned in the story says the car skipped the drop-off location and inched past a bus stop, and other people mentioned inefficient routing.
I don't know how the system works, so this might be user error of some kind, but plenty of people are not going to want to get in a taxi if they feel like they have no control over where it's going or where they can get out.
I have to believe the last block or two problem will be a big issue with self-driving whenever it eventually arrives.
- Uber and Lyft are more toe-stepping and will encourage their contractors to paint outside the lines, dealing with the consequences once the administration has caught up with them and is presented with the fait accompli that this is voters’ expectations now.
- Google/Waymo has better relationships with local authorities and can obtain the permit to drop people off after they’ve proven they are playing within the line — and can wait for, and eventually finance urban furniture changes.
Both use people’s expectations, but differently.
What's up with this statement? Should I be forced to publicize my phone calls just because I made them while driving on public roads? It's utterly bizarre that the author thinks he is entitled to see Waymo's data.
Your individual phone calls are made as part of your general participation as an individual in society. One of the largest companies in the world, which does its best to avoid paying taxes, is using public roads as a fundamental part of the infrastructure for a project to generate data.
Maybe a better analogy is people who grow large amounts of marijuana in national parks? Yes, it's true that the growers are part of the public, and that the public owns the land, but...
I wouldn't have a problem with stuff like this if corporations were taxed at reasonable rates, and didn't participate so wholeheartedly in efforts to corrupt our democracy. Google donates to many truly vile, despicable politicians in order to shirk accountability, hamper regulation, and accomplish just this sort of de-facto subsidy and others like it.
Exactly, and drawing a distinction between the two is essential. In the US that seems to be the ideal, anyway.
Corporate tax is incredibly high in the United States. It is why corporations funnel their money into other areas where it is taxed at a lower rate. No one gets away without being taxed. Payroll taxes, property taxes, use taxes, L&I taxes, etc.
We created nukes, landed on the moon, took sludge out of the ground and used it to power the world. We connected this world with wires and glass fibers to build a real-time global communication system that also gathered all of the world's information in a singular and immediately accessible place.
Building a self driving car is hard but really not as tough.
How quickly do you think we'd get self-driving cars if the USA spent 4% of the federal budget on it, like NASA received in the 60s? (That's about $40B a year for a decade.)
> How quickly do you think we'd get self-driving cars if the USA spent 4% of the federal budget on it, like NASA received in the 60s? (That's about $40B a year for a decade.)
At that cost we could adapt all infrastructure to suit self driving cars, instead of developing self driving cars to adapt to human infrastructure. But I think that kind of cost is always going to be beyond what's acceptable.
I think the discussion is mostly pointless because of diminishing returns: if you can have "99.9% full self driving" for a tiny fraction of the cost, who would want to pay to go from 99.9% to 100%?
Initially human remote drivers will take care of the rest. And then there is a very slow commercial race towards using fewer humans that drives the very slow march to 99.9% and 99.99% self driving and so on. Driving the last second of the last edge case route is basically something that requires AGI (as long as we don't adapt infrastructure).
I think remote drivers will probably have to "rescue" cars without piloting them, usually just assessing a situation and overriding something (driving through an obstacle etc). A passenger (if there is one) could do the same. But sometimes actual remote driving would be required of course.
I don’t think we’ve demonstrated anything autonomous beyond the most trivial kinds of autonomy (e.g. the V2) in all of our technology history.
It's an extremely narrow set of problems that have to be solved incredibly well. It mostly just comes down to creating an accurate 3d representation of the world from a bunch of sensors. You also have to correctly segment and label each object in that 3d representation. If you did those two things extremely well, the actual driving logic can be hardcoded.
The problem is that each of these systems has problems so they all have to improve and compensate for each other.
I don't know whether or not AGI needs to be developed to make a useful self-driving car, but as time goes on I'm beginning to believe that's the case.
Predicting motion once you have small time slices and very accurate 3d representations is very very easy. You can easily calculate expected paths. You have to remember that computers see the entire situation at the same time. A bike doesn't just cut off a self-driving car the same way it does for a human. Humans are slow, our increments of time are large and in the hundreds of milliseconds and we can only focus on a couple of things at a time. A computer will notice the slight change in velocity and acceleration within single-digit milliseconds. Then it just has to predict the probability of collision. These calculations are simple.
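A minimal sketch of the kind of calculation meant here, under the (strong) simplifying assumption of constant-velocity motion; real stacks use much richer motion models:

```python
# Closest-approach check between our car and another road user, assuming
# constant velocity over a short horizon (a strong simplifying assumption).

def min_separation(p1, v1, p2, v2, horizon=3.0, dt=0.01):
    """Minimum distance (m) between two constant-velocity agents within the horizon."""
    best = float("inf")
    steps = int(horizon / dt) + 1
    for i in range(steps):
        t = i * dt
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        best = min(best, (dx * dx + dy * dy) ** 0.5)
    return best

# Car heading east at 15 m/s; a cyclist 30 m ahead drifting toward its lane.
gap = min_separation((0.0, 0.0), (15.0, 0.0), (30.0, 3.0), (0.0, -1.0))
print(gap)  # ~1 m of clearance -> flag as a potential conflict
```

Run every few milliseconds against every tracked object, this is cheap; the expensive part, as the comment says, is getting the positions and velocities right in the first place.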
Deciding what to do in these situations can very much be efficiently hardcoded using decision trees. No one right now working on self-driving cars dares to use a neural network or any other unexplainable & unbounded ml algorithm for policy. You have to be able to hard code in new edge cases as they emerge. You have to be able to study specific crashes or incidents and then adjust the decision-making scheme to specifically avoid that situation in the future.
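To illustrate, a toy hand-coded policy of this shape; every field name and threshold here is hypothetical, not taken from any real system:

```python
# Toy hand-coded driving policy: explicit, auditable rules rather than a
# learned black box. All field names and thresholds are hypothetical.

def policy(situation: dict) -> str:
    """Map a simplified world summary to an action."""
    if situation.get("obstacle_ahead"):
        if situation.get("time_to_collision_s", float("inf")) < 2.0:
            return "emergency_brake"
        if situation.get("adjacent_lane_clear"):
            return "change_lane"
        return "slow_down"
    if situation.get("in_construction_zone"):
        return "slow_down"
    return "continue"

print(policy({"obstacle_ahead": True, "time_to_collision_s": 1.2}))
# emergency_brake
```

The point of this structure is exactly what the comment describes: when a rule misfires in the field, engineers can inspect the tree, find the branch that fired, and patch in the new edge case.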
Truly, the hardest problem is taking in data from multiple sensors, segmenting it, and then labeling it, all in real time. The sensors are faulty and super expensive. There are also so many different objects out there. If you actually look at the ancillary startups in this industry, they're not working on "common-sense" general intelligence algorithms. They're working to make better & cheaper lidar. They're working on computer vision problems. They're working on image segmentation.
Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time was 0.00 seconds it's too late to avoid a collision. The problem is the robot didn't anticipate that the pedestrian was going to cross despite not having the right of way. Humans are better at reading social cues than robots. Maybe robots can learn that, but it's a significantly harder problem than path planning and image segmentation. This applies further than pedestrians and also drivers and predicting their behaviors on the road. And if you try to drive cautiously to avoid this potential scenario, you effectively stop and crawl every time you see a pedestrian and are not very useful for moving from point A to point B (not to mention all the pissed off traffic behind you).
The reason it's difficult is because it's an uncontrolled environment, and the robot has to be able to anticipate what other drivers/cyclists/pedestrians will do. Robots have done wonders in controlled environments, but trying to bring them to the real world has always been a struggle.
The standard isn't "perfect under all conditions", it's "better than a human". Humans are, honestly, pretty bad at driving. The bar is not that high, perhaps unfortunately.
Why does a robot driver need to anticipate this? Does a human driver need to?
If I'm walking up to a pedestrian crossing and a car is approaching, I don't just step out into the road, even though I have the right of way. I try to make eye contact with the driver to see if they recognize I'm crossing. They'll often nod or do something similar to signal that they're letting me cross.
A machine has to understand these social cues as well. It might even be helpful if the machine has a way to signal its intentions back to pedestrians.
Computers can also have a much faster reaction time, so a human may need to predict one second ahead, but computers may be able to get away with less.
This is an assumption and has not been shown to be correct or even probable
Alternatively you could develop braking technology which gives vehicles a stopping distance of 0m, but this might be a bigger technological advance than full self-driving AI, and I'm not sure it would be that comfortable for the passengers....
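For what it's worth, basic kinematics shows why a 0 m stopping distance is off the table: the required constant deceleration is a = v^2 / (2d), which blows up as d shrinks. A quick sketch with illustrative figures:

```python
# Required constant deceleration to stop from speed v within distance d:
# a = v^2 / (2 * d). Speed and distances below are illustrative.

G = 9.81  # m/s^2, standard gravity

def decel_g(speed_ms: float, distance_m: float) -> float:
    """Constant deceleration, in g, needed to stop within distance_m."""
    return speed_ms ** 2 / (2.0 * distance_m) / G

v = 31.3  # ~70 mph in m/s
for d in (50, 10, 1):
    print(f"stop in {d:>2} m: {decel_g(v, d):.1f} g")
# ~1 g at 50 m (already near the tire-grip limit), ~5 g at 10 m, ~50 g at 1 m
```

Stopping from highway speed in 50 m already needs about 1 g, roughly the limit of tire grip; anything much shorter means forces no passenger (or car) would survive comfortably.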
One key part of driving is communicating - with pedestrians, cyclists, other drivers. This happens through body language and other fairly subtle cues.
When you can't make AI work for responding to questions given in text form on an extremely limited problem domain, how on earth would it work for something that's orders of magnitude less well defined and more broad?
I mean, it _could_ be hardcoded, but there are millions of edge cases, so it's pretty infeasible. I agree with the parent comment that full Level 5 requires something close to AGI; the difficult part of getting a self-driving car is giving the AI something along the lines of "common sense", the ability to reason about what to do in an unfamiliar situation.
What happens when a street is temporarily closed but doesn't have the correct signage? What if there's a police officer or road worker signalling instructions by waving their hands? What if the lights at an intersection stop working? What happens if there's a car burning on the side of the highway and drivers need to change lanes to go around it?
And these are just some of the problems in a large American city. Think about rural areas, places with more aggressive traffic, places with wildly different written and unwritten traffic rules.
It does require AGI, at least if you're planning to drive on most of the world's roads, and not only on some "pampered" streets in the middle of the desert or on heavily-regulated and very well maintained streets like in Norway or Switzerland.
As a human, I have a quite "accurate 3D representation of the world", but even so, I'm often left dumbfounded by what the people driving on the same streets as me are doing. And even if you do manage to replace all those other people with self-driven cars, how do you account for cows ending up in the middle of an interstate (it happened to me at least once), for wrong street markings or no street markings at all, or for drunken bicyclists you can't see at night?
Here's someone following right along with the suggestion that regulations, not lack of technology, are to blame: https://www.wired.com/story/outdated-auto-safety-regulations... The author, part of the Competitive Enterprise Institute, works for Google: https://services.google.com/fh/files/misc/trade_association_...
That was page one of my search results, but suffice to say Google's been insinuating this for a while, both from the Chris Urmson era and the John Krafcik one.
I specifically referred to Tesla in my post as well. I saw the suggestion that Elon's claims about release dates for Autopilot features were effectively timed to manipulate the market. I'd give credence to that theory, or to the theory that Elon just has no clue how far he actually is from success. One of the two.
Both companies horribly misrepresent how close self-driving actually is; it isn't around the corner.
To be clear, this isn't meant as a blast against The Information or ballmers_peak; they're transparent about their affiliation and about this being an article whitelisted for readers referred by Hacker News. It's also a totally appropriate article for HN.
That said, I kind of wish there were some in-line indicator of a motivated submission source. I guess a downside would be unethical publishers just laundering their submissions through ostensibly unaffiliated accounts, but I'd feel more comfortable knowing we had some kind of system here to encourage transparency.
The crowd helps us here and paternalistic rules can only prevent it from functioning correctly. If the post is good, it will move up. If not, it won’t.
Additionally, this is an interesting case as the custom link is an official way to bypass the paywall, which is the more frequent HN complaint: https://news.ycombinator.com/item?id=20414141
Comments are a different story, e.g. if a startup employee that works for the startup comments on a post about the startup, they should disclose it.
But for some reason, these driverless cars get the most attention.
Talk about your all-time overfitting problem.
Now let’s assume it’s successful and sustainable. Let’s say we also manage to get it to work for freight. What have we achieved? Well, we’ve put a couple more of the jobs available to low-skilled workers on the scrap heap. There will be consequences to doing that, but I doubt Waymo will be footing that bill.
I think once you expand past the US, and past trucks, this is definitely "Next Google sized".
Lots of people could do with a driverless car.
But if we had "full" self driving, as in the car can drive empty, that opens up new possibilities.
First of all, I could avoid owning a car altogether. I could be part of a car-sharing/pool system where I summon a car when I want it instead. Those pools exist, but their biggest drawback is that you have to get TO the car instead of having it in your driveway.
Second, I could get driven home after having a drink. Where I live, the legal limit is 0.02, so one drink equals a taxi.
Those are features I'd very much be willing to pay for.
Now imagine if every driver on the road followed all rules. Traffic flowed smoothly and there were fewer accidents. Your car dropped you in the middle of the city and drove out of sight till it was needed again. No one ever drove drunk or impaired. You could catch up on work or just watch something during your daily commute.