You may giggle at Google's attempts to catch up with Facebook, but looking at this video... boy, they are way ahead. Google isn't so much a search company; it's an innovation company. They are already building the next trillion-dollar industry while Facebook is optimizing the sharing of cat pictures.
It is normal for pedestrians to look for eye contact to know whether a driver will yield the right of way or simply accelerate and pass (at crosswalks without traffic lights). In dense pedestrian cities (I am from Spain) it is very common to see some guessing and false starts and stops between the pedestrian and the driver; nobody seems to know who is finally going to yield. I wonder whether this has been thought about, and how you'd signal the car's intention to a pedestrian. Maybe a forward-facing brake light? It could be that automatic cars will be much better yielders than human drivers.
It makes sense for the self-driving car to yield to pedestrians all the time. So if you see that there is no driver, you assume it will stop until you are out of its way.
There will be no more awkward social guessing games!
It will be interesting to see how people exploit the A.I.
There might be little tricks you could do to make an autonomous car yield. Or if you see one about to park, maybe you could get really close to it, and it would try to find another spot instead of fighting for the spot. Or pedestrians might carelessly walk in front of it, knowing it will stop for them.
Edit:
It will also be interesting to see how the technology changes as whole fleets of them are deployed. They will constantly be sending each other data: pothole detected on highway; I'm ahead of you and have braked suddenly; this road is congested, use alternate route; is there a parking spot close to my location?; tsunami warning - all cars go to high ground.
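A fleet-wide messaging scheme like that could be sketched as a small set of typed broadcasts; every name and reaction here is hypothetical, just to make the idea concrete:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Event(Enum):
    POTHOLE = auto()
    HARD_BRAKE = auto()
    CONGESTION = auto()
    EVACUATE = auto()

@dataclass
class FleetMessage:
    event: Event
    lat: float
    lon: float
    detail: str = ""

def handle(msg: FleetMessage) -> str:
    """Decide how a receiving car reacts to a broadcast from another car."""
    if msg.event is Event.HARD_BRAKE:
        return "increase following distance"
    if msg.event is Event.CONGESTION:
        return "reroute"
    if msg.event is Event.EVACUATE:
        return "drive to high ground"
    return "log hazard for map update"

print(handle(FleetMessage(Event.CONGESTION, 40.7, -74.0)))  # reroute
```

The interesting design question is which of these messages go car-to-car directly and which go through a central service.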
I guess someone will hack the software in his car to send a "road blocked" signal, so that he can drag race a friend on fifth avenue at rush hour. I also expect that it will become illegal to hack one's own car soon after that.
Stuff like congestion would probably be left to computers operated by the local traffic authority rather than the general public.
Of course you would need some form of hazard signaling which could be open to abuse (although it could be abused by human drivers anyway).
You can always monitor what cars are doing, and if you find that some particular person's car consistently behaves in a way that causes problems, you can potentially take legal action against them.
There is a second part to it: how will cars behave when driving along a crowded street where people cross anywhere, changing direction and intention very quickly? I don't think there will be danger to pedestrians, but avoiding false emergency braking is going to be a tough problem. Once more you have to rely on visual contact to know people's real intentions: crossing, waiting for the car to pass, simply unaware of the car coming, or going to pick up something that fell at the edge of the street with no intention of crossing.
Not an easy task!
Also, it is strange that they are not working on a kind of transponder that could be used between nearby cars to coordinate maneuvers (maybe I missed it in this article, but I think I read that somebody was developing one). It will be useful once robocars are more common.
Like any other technology, people will grab the wheel when the incentives don't pay off. Take for example the iPad. iPad is great as a media consumption device, or the occasional email. But as soon as I have to write longer than a few minutes, the iPad becomes a burden. In the case of a heavily crowded street a self-driving automobile might be able to trek through, but it may be more feasible to grab the wheel if you just want to move on.
Once self-driving cars become more commonplace, issues with eye contact between pedestrians and cars will naturally sort themselves out as people grow comfortable with the technology.
This doesn't make sense for many American cities where the downtown only houses the municipal government, museums, and boutique shopping, but the city itself has several other major shopping districts, not to mention residential and industrial districts. These different districts are often miles and miles apart with no easy walking routes between or often within the districts. For example, Anchorage, Alaska, is 1,704.7 sq. miles (4,415.1 sq. km) [1], and it can sometimes take over 2 hours to drive from one end of the city to the other.
My guess is that originally self driving cars will be mainly used on highways and take you to the metro hubs. Once you reach a very populated metro area, you will use the local public transportation.
They would probably BE the public transportation. We need large buses because drivers aren't flexible enough to deploy, and you need to have enough capacity to meet peak demand. If you can have smaller driverless vehicles, and just put more of them on the road during peak hours, it's much more cost-effective (and environmentally friendly) to have smaller capacity vehicles. I doubt we'll see more than 10 people per public transit vehicle once driverless transport becomes the norm.
A diesel bus gets 6 mpg --- you only need 10-15 people on one to match even the most fuel-efficient of cars in terms of environmental friendliness, unless you're referring to something else.
That's assuming there is only 1 person in the car.
A car that does 60 mpg with 5 passengers moves people at 300 passenger-mpg (of course you will lose some mpg due to the extra weight).
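In other words, the fair comparison is passenger-miles per gallon, which is just fuel economy times occupancy:

```python
def passenger_mpg(vehicle_mpg: float, passengers: int) -> float:
    """Passenger-miles per gallon: vehicle fuel economy times occupancy."""
    return vehicle_mpg * passengers

print(passenger_mpg(60, 5))   # efficient car, 5 aboard -> 300.0
print(passenger_mpg(6, 50))   # full diesel bus         -> 300.0
print(passenger_mpg(6, 10))   # bus with 10 aboard      -> 60.0
```

So a full bus and a full efficient car are roughly a wash; a mostly empty bus loses badly.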
My guess is that automated vehicles will be both more predictable, and more dependable, at yielding.
When a pedestrian jaywalks in front of an oncoming vehicle, he's depending on the fact that the driver isn't distracted, and can pick him out of the visual clutter at the sides of the road. I'm a good driver, but I still find that it can be difficult to spot a pedestrian who is standing among parked cars at dusk, especially considering that my rear-view mirror obstructs some of my vision further up the road on the right side, as it does for a lot of drivers.
An automated car should always be paying attention (especially since it can pay attention to lots of things at once) and doesn't have to depend on human eyesight to detect potential collisions - it can utilize lasers, infrared and other sensors.
I'm guessing the robots will behave a great deal more deterministically than the humans. It's the old-style human-driven cars that will need the warning lights.
Here's a possibility. I'm not saying they will do this. It's just an illustration of what someone controlling the software in a fleet of self-driving cars could do.
You summon a self-driving taxi to take you to Bob's Bistro--known throughout the city for its steaks, and your call is serviced by a car using Google self-driving software. Google knows who you are and knows where you are going.
The car's route planner determines that there is a route that is within 10% of optimal that takes the car past Sam's Steakhouse, and Sam is currently paying for a Google ad campaign targeted at self-driving passengers. The car takes you past Sam's where Sam has a hard-to-miss sign touting reviews that say his steaks are better than Bob's.
Google is clever so the car might even time this so that you'll hit a red light in front of Sam's to give you a better chance of seeing Sam's sign. If the sign is electronic the message you see comparing Sam's to Bob's might even be specifically targeted at you. A moment later when the next cab goes by, carrying a passenger to Carl's Crab Shack, that sign might have a message touting Sam's amazing surf'n'turf compared to Carl's.
Right now, you go to Google when you want to drive more traffic to your website. With self-driving vehicles, you'll go to Google when you want to drive more traffic to your brick and mortar site.
This is a pretty paranoid and far-fetched scenario.
Generally, I'd expect that an automated taxi would operate under rules similar to current taxis - the obligation would be to take the customer to the destination via the most direct route specified, unless otherwise directed.
Taking someone on another route might constitute kidnapping - you're essentially transporting them against their will to somewhere they did not ask to go.
I didn't read the parent as paranoid at all. I thought it was pretty cool.
Further, wrt the optimal/most direct route thing:
* the algorithm could be tweaked fairly easily to include "ad presence" en route
* "most direct route" is an ambiguous term. There could be two routes that cover almost the same distance in miles, but the slightly longer one is likely to take less time. Which constitutes most direct? It turns into an optimization problem, possibly with no obviously optimal point (particularly when you account for the fuzziness needed to deal with unexpected traffic patterns, accidents, etc.).
* Finally, the ad-laden route could be a free ride, whereas the "direct" route could be paid. This may or may not get around regulations, but as long as the time delta was not significantly bad, I would choose the ad route.
There's a crucial difference between what you're proposing and what the parent suggests - user choice. At least as I read the parent, he's not saying you're given a choice of what route, etc. That seems like a crucial distinction to leave out if he meant that it would be at the request of the user.
There is a crucial difference also between what you are responding to, and what I said. I am not necessarily proposing the user choice should be followed or is even necessary. It is just a possible variation on the theme.
If you get in a cab and say "take me to $x" it is the cab driver's responsibility to take the best route. However, since metering is distance AND time based, there is a lot of wiggle room in what best route means. It is always an optimisation problem, and relies on imperfect data of the future conditions of the route. Further, in most american cities, there are many routes that are no more or less direct, as a result of the grid system. Therefore the car taking you on the route that goes past paid displays is not kidnapping. It is choosing 1 of n equivalent routes - the one most profitable for the operator, while still meeting the criteria of the user.
Further, with the advanced traffic and condition understanding available to a robotic car, the ability to choose an optimal path will probably improve significantly, even when the small fudge factor of going past advertising is accounted for. So what if it's 30s less optimal on your ride to go past the advertised place, when the human cabbie would have chosen a route adding 5 minutes?
"If you get in a cab and say "take me to $x" it is the cab driver's responsibility to take the best route. "
Obviously this can vary by municipality, but in most cities drivers are required to take the "most direct" route, rather than the "best" route, if for no other reason than the latter would be so subjective as to be useless for enforcement purposes.
So if you're in a New York taxi (for example), the driver in theory isn't allowed to take you off the most direct route without your permission, even if another path would be faster.
I'm at (1,1) and I want to get to (4,4). Assuming a restrictive interpretation of most direct[1] and no one-way streets factored in[2], there are still two equally direct and valid paths:
(1,1) -> (1,4) -> (4,4)
and
(1,1) -> (4,1) -> (4,4)
Which is most direct? If the cab driver chooses to go with the latter because his buddy pays him to go past the billboard at (3,1), what's to stop him? What's wrong with that?
[1] If most direct means "shortest path", then there are many more equivalent routes.
[2] One way streets also offer a nice choice of where to do turns around a block under favorable circumstances. Similarly road construction offers extra choices by detour.
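The point in [1] can be made concrete: under the "shortest path" reading, the number of equally direct grid routes is a binomial coefficient, since any ordering of the required east and north moves is a valid shortest path (Python 3.8+ for `math.comb`):

```python
from math import comb

def shortest_grid_paths(x1: int, y1: int, x2: int, y2: int) -> int:
    """Count equally short routes on a grid with no one-way streets:
    choose where the |dx| eastward moves go among |dx|+|dy| total moves."""
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    return comb(dx + dy, dx)

print(shortest_grid_paths(1, 1, 4, 4))  # 20 equally "direct" routes
```

So even in this tiny example the operator has 20 routes to choose among, all defensible as "most direct."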
I'd assume that a driverless taxi would have an "I want to get out, please drop me off at the next safe opportunity" as well as some method of disputing the fare.
Besides, it would not necessarily even have to redirect you past a different location, it could just stream an advert to the speakers in the car.
It could also be possible that some routes would be subsidized, like you could enter the location of one restaurant and a rival restaurant could offer to cover your fare if you went to them instead.
Another aspect is that you would need an easy way to split the fare amongst multiple passengers, I almost never take a taxi if I am traveling alone.
Cabs already stream ads once the meter starts in many cities, so that's not new, nor is fare sharing - New York has had it for decades.
I'm sure any driverless cab would have the ability to ask to get out, but technically one might argue you're kidnapping someone as soon as you take them off the route they requested for any reason, even if there's an "opt-out"... at the level of an individual cab ride and human driver it's moot, but if you're operating a fleet of thousands of cabs, doing that over millions of rides, it could easily turn into a class-action suit.
What do you think a bunch of people in self driving cars with smartphones or personal wifi hotspots are going to do? Surf the internet, and therefore use more of Google's products. It's similar to Chrome or Android in that it enables more people to use Google's core services which they make money off of.
- Active internal displays that push ads based on detailed mapping information.
- Delivering you to a particular location after you look it up on Google. What's the Price per Delivery (PPD) on AdWords?
I don't work for Google, so I don't know. I'd imagine some projects aren't all about the strategic long-term "money plan" but rather the strategic long-term "people plan." This is a cool project, and Google might just want to give talented people an outlet for their creativity. They work on this fun project (and are having a good time, so they don't want to leave Google), and they also work on another project that actually brings in money. Moreover, a project like this signals prestige to outsiders (which attracts more talented people). Plain and simple.
It'll give them very accurate information on where people are travelling from and to - much as Google Maps directions/navigation does, but on a much more precise and reliable scale.
Bear in mind that there's a completely alternative model for car ownership: with fully automated vehicles, there's no absolute requirement for people to 'buy' a car. There could be shared metropolitan pools of cars that arrive and pick you up 'on demand' (much like a shared taxi service, subsidized by subscription).
Also, given that Google would probably know the locations of all the cars at any point in time, they could also get very, very detailed traffic flow information. This they could use to further optimize traffic - i.e. load-balancing traffic through different routes - but also this could potentially be sold to other transit agencies, or even tied to demographic data to suggest where stores should open new premises.
Probably some more ideas/data sources and uses in here that I haven't considered too :)
Everything except for some of the Google X Labs projects, of which this is one.
I'm going to take an optimistic tack and say they're doing this for the PR benefits of being seen as creating new technology that gives people tools they never thought they could have, somewhat like Microsoft has done with the Kinect.
Microsoft made bank with the Kinect. Also, Google is in serious danger of being killed by Facebook (more data = better search results), so Google needs another huge profit stream. Running the entire transportation infrastructure can be that (there will be competitors, but with Google's superior tech talent, their vehicles will be safer and hence win).
If I were them, I would just run the vehicles as a fleet. They have many billions in cash and can borrow any amount, so why not? Then innovate on the maintenance of the fleet with robots as well. Maybe start with franchising. Plus, they could automatically set the rates for each individual journey dynamically and extract as much as possible.
More likely downvoted for making an unsubstantiated generalization - Google is a huge company with hundreds of products and projects, while many of them may connect to advertising / data mining, there are lots that don't.
Yeah, there is literally no chance that they're pursuing this because it will be a multi-billion-dollar-per-year industry selling hardware to people and changing the world for the better.
It must be because they have some dire plan to find out where I get my tacos!
I want to believe this is the future. Maybe not from Google, maybe from other companies, but self-driving cars are an amazing idea. Putting a 100 mph giant piece of metal in the hands of humans, on the other hand, is not a smart idea...
Once legislation is updated to allow self-driving cars in most parts of the world, I wonder how profitable self-driving cars could be for Google.
About 77 million cars were produced in 2010. With the appropriate licensing, a percentage of that could make a very nice profit.
I should buy some Google stock :)
I don't see what Google's role is going to be, aside from pushing forward legislation for unknown reasons. The auto manufacturers don't need them for the technology and research... they've all had access to that for just as long. We'd have self-driving cars today if the public (and regulators) were ready for it.
But they're not and aren't going to be overnight; what makes more sense is to gradually introduce the features a little at a time so that the move to complete autonomy is a small step instead of a leap. First, automatic parking. Then, collision detection warnings. Next, lane assist that steers you back into the lane when you drift. In the next few years, you'll start seeing "smart cruise control" that will drive for you on well-marked roads in mapped areas.
This assumes that the path to a self driving car is a smooth function that you can incrementally follow, rather than automatic parking and collision warnings just being easy problems compared to the messiness of general driving.
No company is indispensable in the evolution of a market, but I think it is possible that abandoning the incremental approach and starting with the full problem will turn out to be the one to get us all the way there, instead of a car that can correct oversteer and warn us in 90% of situations that we're about to make a mistake. Of course, like with many products, once one group shows the way, the only thing holding everyone else back is patents and data (especially geographic in this case).
I don't know. It is my impression that traditional companies are not able to produce complex software; I guess there are many paradigms you have to break to become a software company. It seems to me that is the reason cell phone companies ended up buying their OS from Google.
Spend some time studying what current auto engines are, and how much software they embody. I'm not sure how much of it has been developed by the auto companies themselves, but there's quite a lot of it, and it's pretty robust: how often has your car OS crashed lately?
Modern engine control is a collection of lookup tables and feedback loops with some preprogrammed "limp" modes to fall back on if things go pear-shaped.
Robust? Yes. Adaptive? Yes. Complex? Not really. Computer vision, road/object/hazard detection and avoidance, and the like are much, much harder problems to solve.
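A toy version of that lookup-table-plus-limp-mode structure, to illustrate why it's robust but not complex (all values invented; real ECUs interpolate between map cells rather than snapping to one):

```python
# Fuel map indexed by (rpm band, throttle band); values are injector
# pulse widths in ms. A real map has hundreds of cells, tuned on a dyno.
FUEL_MAP = {
    (0, 0): 1.0, (0, 1): 1.5,
    (1, 0): 1.8, (1, 1): 2.6,
}

def band(value: float, edges: list) -> int:
    """Index of the band a sensor reading falls into."""
    return sum(value >= e for e in edges)

def injector_ms(rpm: float, throttle: float, sensor_ok: bool = True) -> float:
    if not sensor_ok:
        return 1.2  # "limp mode": fixed safe pulse width, ignore the map
    cell = (band(rpm, [3000]), band(throttle, [0.5]))
    return FUEL_MAP[cell]

print(injector_ms(2000, 0.3))          # 1.0
print(injector_ms(4500, 0.8))          # 2.6
print(injector_ms(4500, 0.8, False))   # 1.2 (limp mode)
```

Every input maps to a bounded, pre-tested output, which is exactly why it rarely fails, and exactly why it tells us little about open-ended problems like vision.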
I think OP defines non-traditional in this instance as software companies.
While I believe there's room for other companies to develop self-driving navigation systems, I think the Android example is pretty apt.
It's interesting to think about when self-driving cars will actually be in stores. By then, all navigation systems will meet strict safety guidelines and some standardised tests.
If all self-drivers are equally safe, the AI's driving style will become a differentiating factor. It seems to me like Google has an advantage in this type of development, although the future can certainly prove me wrong.
I think what he means is that car companies are good at making cars, but they're not as good at making the kind of AI you need for a fully autonomous car.
If autonomous cars become significantly safer in terms of the number of accidents, you could make a strong argument for lighter, lower-density structural materials and fewer passenger safety features; in short, significantly simpler cars.
Of course that would depend on getting all the cars on the road to be self-driving - you don't want your crashless 500 pound Google car getting smashed by a behemoth human-driven Smart Car.
But if you significantly reduced the materials cost in the car, that opens a big margin for Google to trade in.
I would love to see these in a city like SF. It's a small city (geographically), but we spend $1B/year on MUNI, which carries about 700K passengers/day. Put together a fleet of, say, 5000 such autonomous vehicles that take you door to door, for the price of a MUNI ticket. Wow. That would be something.
If I do the math, you will need many more than 5000 vehicles. 700k trips with 5000 vehicles is 140 per vehicle per day: say 50 in morning traffic, 50 in evening traffic, and the other 40 in between. Even assuming 5 people in the car on each trip, that means 10 round trips from suburb to office per car.
I guess you will need at least 100k vehicles, but with that number of vehicles, that $1B per year starts to look cheap ($10k x 100k = $1B).
Given the amount of stuff going on, it makes me skeptical of GM's claim that they would have a self-driving car available for sale in 2015.
The Google stuff is pretty cool, I notice they don't show the early videos of people running around the parking lot trying to get it to stop :-) But such things are to be expected in development. This is a technology I look forward to as it would simplify a lot of things.
> With its full 360° horizontal field of view by 26.8° vertical field of view, 5-15 Hz user-selectable frame rate and over 1.3 million points per second output rate, the HDL-64E provides all the distancing sensing data you'll ever need.
That's pretty interesting. I would have thought that they needed data at a faster rate than 15fps.
That is interesting! Looking into it, the estimated response time to an unexpected event for an average human driver is 1.9 seconds (i.e. time to notice, look, evaluate, decide, react).[1] Of that, 50 ms or 1/20th of a second goes to just processing the visual chemical stimulus into a usable signal.[Awesome Dinosaur Comics link] So if the robocar is getting a frame every 66 ms, that's probably not a significant factor making it better or worse than a human driver -- it's certainly not the key factor to optimize. On the plus side, the robocar is constantly looking in all directions rather than having to refocus on unexpected events, and can send signals to the car instantly, so it saves a couple hundred ms on each end. It probably ends up with a good bit more time than an average human to make decisions.
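The arithmetic can be made concrete in terms of stopping distance at highway speed (assuming 60 mph and the 1.9 s figure cited above):

```python
SPEED_MPH = 60
FT_PER_S = SPEED_MPH * 5280 / 3600   # 88 ft/s at 60 mph

def travel_ft(seconds: float) -> float:
    """Distance covered while the system (or driver) is still reacting."""
    return FT_PER_S * seconds

print(round(travel_ft(1 / 15), 1))   # one 15 Hz sensor frame: 5.9 ft
print(round(travel_ft(1.9), 1))      # average human response: 167.2 ft
```

A one-frame sensing delay costs about 6 feet; an average human response costs about half a football field.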
Of course the robocar isn't allowed to drive like an average human. It'll have to drive like a perfect human. Seems like that should be doable.
I expect the forward and rear radar, well, the forward radar, are used on the highway where faster updates are necessary.
Human visual processing works out to only about 5-10 "frames" per second, so it's not like 15fps sensor acquisition is worse (assuming the machine can react very quickly once it has sensor input).
I wonder how long it will take until the price of something like this gets to reasonable levels. The lidar and other sensors probably won't get cheap anytime soon. Imagine the cost of Google having to drive through all the places first, mapping them. Then the massive insurance premiums until it's proven that self-driving cars are safer (and then it will probably get expensive again after the first accident).
Perhaps it would make sense for government to step in, like it did with electric cars.
The cost of the hardware, sensors and mapping is expensive (initially) if you are talking about adding the hardware to a vehicle that one person or family will own. If you are talking about adding it to each vehicle in an autonomous taxi fleet, it suddenly makes a lot more sense and economies of scale kick in quickly.
You are essentially replacing a taxi driver's wages with sensors and mapping. Not to mention the extremely well-paid surgeons who clean up after drunk-driving accidents, theatre nurses, insurance costs, the cost of mechanics... This technology has so many implications it's hard to visualise them all initially. It's going to be extremely disruptive to some industries, and a boon to others (e.g. where I live, rural pubs are in trouble. This could change all that: "get driven home at 200mph in perfect safety after a night getting smashed!" etc.)
The first to use this technology will be long-haul cargo companies, and it will happen when the technology becomes cheaper than employing truck drivers, and when the safety is good enough.
I imagine a scenario where trucks will drive themselves at night, over the long distances, when few people are around, and when they get close to cities, a real truck driver will jump in and navigate to the end destination, in the day, when there's a lot of people around.
And not until people are used to self-driving trucks will the tech be available for personal transportation, even though the technology will be good enough and increase safety before that.
(In the same way self-flying airplanes will come to cargo transports first, and human transport much later)
I'm doing some work at the moment using Kinects. Obviously, the range and accuracy are much reduced, but at $100 it's not bad. I'd expect that LIDARs can come down a lot in price once they stop being speciality equipment and start being used in more things.
Some people have noted that personal ownership of a self-driving car is non-optimal in terms of effective use of this tech. But you make an excellent indirect connection: since these things would be most effective with a lot of usage, electric cars or some other renewable energy source would be ideal for the environment.
Not to speak like a party-pooper, but could someone potentially mess with this LIDAR system by aiming a very bright IR beam at the device, if not a matching laser itself?
Bringing down the cost of the LIDAR will be one major task, the other will be making this untouchable in the environment, IMO.
> Not to speak like a party-pooper, but could someone potentially mess with this LIDAR system by aiming a very bright IR beam at the device, if not a matching laser itself?
You could probably screw up the samples taken from wherever your beam hits on the mirrors. But you could also shine a laser in someone's eyes while they're driving and seriously impair them as well.
But what about reflective objects? The roadway is covered with different degrees of weirdly-reflective surfaces. Surely those beams land in places where they shouldn't (on the mirrors), and surely the car doesn't stop every time.
LIDAR works _through_ reflections. It essentially paints out the surrounding area with laser pulses and then measures the time for those pulses to come back. Since lasers travel at the speed of light, this time-to-return is quite small and hence they need quite accurate clocks (which is where a lot of the expense comes in).
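The time-of-flight arithmetic itself is simple - the expense is in timing it. A sketch of the range calculation:

```python
C = 299_792_458  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """LIDAR range: the pulse travels out and back, so halve the path."""
    return C * round_trip_s / 2

# A return after ~667 nanoseconds corresponds to roughly 100 m of range,
# so resolving distances to centimetres means timing to sub-nanosecond
# precision - hence the expensive clocks.
print(round(tof_distance_m(667e-9), 1))
```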
That being said, laser range-finding works well on diffuse surfaces; this is because when diffuse surfaces reflect, they reflect the incoming light in a broad hemisphere (or cone) which sends the laser pulse out in many directions. Consequently, the surface to measure can be at a variety of angles and still be picked up by the LIDAR sensors (the sensor doesn't care about strength, only time-to-return)
So in terms of "weirdly reflective" surfaces out there, almost everything is diffuse enough for LIDAR to work well: car hoods, carbon fiber, chrome wheel covers, etc. The only exception is glass, where lasers generally travel straight through and don't return to the sensors. So LIDAR actually detects "holes" in these situations, as if other cars were driving with no windshield and all their windows down.
So the only real risk would be a large plane of glass in the middle of the highway with completely normal road behind it. LIDAR would miss the glass, and the cameras would not be able to see it either. Most real drivers would fail at that too though :D
I guess what I meant was, surely there are weird surfaces that produce multiple bounces. Light being emitted could bounce between two cars and back to the sensor, or off a shop window (at a high angle, Fresnel reflection), onto something else, and back into the sensor. This data would come back into the sensor and wouldn't be expected.
So surely the automated car, when it sees data it does not expect, does not stop, because it must see data it does not expect often through multiple bounces, right?
Sure, the situation you described certainly happens and is just treated as a generally noisy measurement. The car could detect empty road one second and all of a sudden some small object _right_ in front of the grill in the next frame. To avoid this "freakout" situation where the car slams on the brakes every time a noisy measurement comes in, all the data is passed through a particle filter (or Kalman filter) first before being processed by the AI.
The transition model of the car's environment is known, so it can reason: "there is a very small chance this reading represents a real object and is not noise, because I did not detect anything near this position over the last 20 frames, so I'm going to assign a very low probability to it." Hence you can clean up the data really well, because you're measuring an outdoor environment, not a meteor shower (or anything else where objects could appear and disappear every frame due to high velocity).
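A toy sketch of that gating idea, far simpler than a real particle or Kalman filter (the threshold and the "object stays put" transition model are both invented for illustration):

```python
def gate(history: list, reading: float, max_jump: float = 2.0) -> float:
    """Reject a reading that jumps implausibly far from the recent estimate."""
    if not history:
        return reading
    predicted = history[-1]  # trivial transition model: object stays put
    if abs(reading - predicted) > max_jump:
        return predicted     # treat as sensor noise, keep the prior estimate
    return reading

track = []
for r in [50.0, 49.5, 49.1, 3.0, 48.6]:   # 3.0 m is a phantom return
    track.append(gate(track, r))
print(track)  # [50.0, 49.5, 49.1, 49.1, 48.6]
```

The real filters do this probabilistically, weighting each reading by how plausible it is under the motion model rather than hard-rejecting it, but the effect is the same: one-frame phantoms don't trigger the brakes.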
Radar can handle that problem. To be fair, probably not the automotive radars they're already using, but something in the tens of GHz range could do it.
This isn't that much of a problem in practice. As long as the surface is partially diffuse, some of the reflected light will make it back to the sensor; light that gets scattered elsewhere is inconsequential. All you have to worry about is completely specular surfaces like mirrors, because just like a camera, the LIDAR won't be able to distinguish the reflections from real objects.
The Lidar is really only a convenient system for the prototype - production units would rely mostly on imaging and limited range radar for parking / collision.
Having said that, there is no real difficulty in making Lidar very cheap if you wanted to - it's all solid state.
The Lidar cost is not a concern in my opinion. What is Lidar actually used for right now? I'm guessing just military, science, and research, which makes it very expensive. If there were demand for a million of them, they would cost a tenth as much, or less.
40-inch flat panels used to cost $40,000 at higher volumes (production quantities) than LIDARs are shipping in now. Today those panels cost ~$500 (and are better).
Correct, but these autonomous vehicles can (and should) also forward updates to the maps server. It's the same as with MAC address/WiFi positioning: when you enable that positioning on your smartphone, it sends a list of visible MAC/WiFi entities with reception quality to the central server. The server looks up where you are based on that data, but at the same time updates its lists: some routers have disappeared (people moved) and some have been added. Crowdsourcing at its best.
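The server side of that crowdsourcing loop might look something like this (MACs and coordinates are made up; a real system would weight by signal strength and age out stale entries):

```python
# Hypothetical access-point database: MAC -> last known (lat, lon).
ap_db = {"aa:bb": (37.77, -122.41), "cc:dd": (37.78, -122.42)}

def report(seen_macs: set, position: tuple) -> None:
    """Fold one device report into the AP database: new routers are
    added, moved routers are re-positioned at the reporter's location."""
    for mac in seen_macs:
        ap_db[mac] = position

report({"aa:bb", "ee:ff"}, (37.79, -122.40))
print("ee:ff" in ap_db)  # True: a crowdsourced addition
```

Self-driving cars would do the same with potholes, closed lanes, and changed signage instead of WiFi routers.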
Doubtful. Most of these companies want to own the tech. I'm actually concerned they won't be able to sell it if they tried - Chrysler is actively developing a self-driving system that they hope to deploy in 2 years.
Really? I'm no expert, but this seems like a market where it'd be natural for there to be only a few major players, rather than every manufacturer having their proprietary implementations. Why would you build an in-house system rather than buy one? Generally because buying is more expensive in the long run, because you can build a better one, or because keeping the expertise in-house allows for more chances of integration.
The expense idea makes no sense. This is software we're talking about, so the marginal cost is always going to be 0. It's also software that's likely going to have to go through very expensive certification procedures to be allowed on the roads (or to be accepted by insurance companies). How could it possibly be economical for a manufacturer with 2% market share to do this on their own?
So could a smallish manufacturer at least build a better one, if not cheaper? It's hard to see why you'd expect that. Maybe if 20 of them tried, a couple of them would end up with a better technology than what could be bought. But even if that's true, it's not going to brighten the day of the remaining 18 manufacturers who ended up with substandard software. There's one obvious exception (regional differences - maybe Japanese drivers have very different preferences from German ones, or something).
Are there any integration benefits from making the software for self driving cars in-house? I can't think of any, but I'm not too familiar with the automotive industry. Maybe there's something obvious I'm missing.
Of course there's no guarantee that even if the future ends up as one of a few major in-house systems and a couple of successful publicly available ones, Google's solution would be one of the successful ones. Still seems like it's worth a shot.
There are very few standardized systems in the automotive industry, especially for "luxury items". Recently, that's changed with some manufacturers using the same undercarriage to build different cars. It would take a huge change in the mindset of a car manufacturer to buy something like this that could be a competitive advantage.
Granted, but given that most of them haven't even been able to get their in-dash software systems right (look at Ford's debacle), how likely are they to be able to make dependable self-driving cars?
Does anyone know what languages are used to build the software brains for the car? I haven't found any information on the actual software architecture, more on the hardware.
The technology is impressive, but humans drive okay without LIDAR, radar, or GPS. Maybe some day self-driving cars will be able to drive using only two small visible-light sensors located above and behind the steering wheel. It might just be an AI challenge to operate that way, and commercial systems will always employ advanced sensors for better safety. But I don't think true parity has been reached until they can drive like we do.
But people aren't very good at driving; it would be silly to handicap computers with the features we evolved. Every new car made already has lots of advanced technology to make up for our two eyes.
Agreed, I don't think we should strive for parity. I'd much rather see driverless cars that are better than humans at driving, even if they need more tools to do it. The more poor drivers we can replace with driverless cars, the better.
Commercial self-driving cars should use every sensor modality which is cost effective and which improves safety. I would never argue we should hamstring our driverless cars, let's give them the best shot to be uber-safe and reliable, no question.
In the meantime, on a completely separate, non-commercial track, AI researchers should try to do more with less. Driving using only visible-light sensors is a challenge, and AI is pushed forward by taking on challenges exactly like this - let's see the push continue.
These two tracks may in fact intersect. When your LIDAR and radar are caked with ice and mud, you'll want the car to be able to drive visually at least to a safe stopping point.
Don't forget that humans also have a pretty good accelerometer and sound processing built-in, which helps. But it's true, we are much better at object detection and recognition.
Cars are far-and-away the biggest killer of people in my age group. I sincerely hope that we are not going for true parity, and they will not end up driving like we do.
Using a couple stereoscopic pictures to get the lay of the land is a very, very error-prone process. Humans have to make do with it because we have very limited sensory equipment available to us. Computers can and should make use of the vastly superior technologies that they have at their disposal. One of the main points to this whole enterprise is to make better, safer cars by replacing the primary point of failure behind most collisions. Deliberately crippling the system in an effort to slavishly imitate the thing it's supposed to replace would be completely missing the point.
> I sincerely hope that we are not going for true parity
Commercial driverless cars will blow humans out of the water in safety and reliability. That is the goal and should be the goal. To achieve it, multiple sensors can and should be used.
But separately AI researchers will continue to improve and evolve their algorithms. One avenue for improvement is to operate with fewer sensors. A human driver can drive passably well on any road without LIDAR or radar or GPS. In time computers can and should be able to do the same. We will benefit from that capability, even if in general driverless cars make use of other sensors.
I would love to not drive. I have a 45-minute daily commute each way, and paying attention for 45 minutes of driving is not something I look forward to. I'd rather surf the web, stare out of the windows, or just about anything else. Some people love driving; good for them. But for me, driving is just the thing you have to do to get to point B, where the interesting thing is.
We already have an autonomous self-driving solution to that here.
It uses all-electric vehicles and they even have a convoy mode where each car follows the lead vehicle closely to maximize the number of passengers and minimize wind resistance.
The guidance system is pretty primitive - it's all done in hardware with steel wheels on steel rails - but it makes it relatively difficult for it to go off-the-rails.
Don't anthropomorphise design. Computers are better at processing huge quantities of structured data. Humans are better at pattern recognition and adaptation. Trying to design an autonomous robot by emulating the way humans or animals do it is a recipe for bad design.
They are taking the right track today by using a kitchen sink of radar, lidar, gps, visual. This is the way to deliver a self-driving vehicle soonest and safest.
But a self-driving car that uses only visual sensors is clearly possible in the long run. And having that technology would only benefit multi-sensor cars. What if one or more sensors breaks when you are doing 85 mph with the whole family asleep? I'd certainly welcome the resiliency to operate on less input.
Aren't you conflating two issues? Getting visual sensing to the same level as radar/lidar is a great aim. Having redundant multi-modal sensing is a great aim. Switching over to visual-only isn't.
There are too many situations where one type of sensing isn't good enough (e.g. lasers scatter off snow and can't penetrate fog/dust, radar can get saturated by multiple corner reflectors, visual sucks at night, IR sucks in bright sun, etc). To reduce cost visual-only might be a good way to go, but it won't be versatile enough to cover all the necessary scenarios.
I'm not advocating switching over to visual only, unless the other sensors are broken or unavailable, as you describe.
I'm just advocating we do the research, create some visual-only cars as proof of concept, solve those thorny AI problems. It's an artificial constraint, one which will produce engineering innovations which can then be applied back to real world products.
Artificial intelligence is a broad category - it's not just the driving decisions but also particle filtering, car localization, policy search, object tracking, Kalman filtering, etc. The fact that the car can intelligently drive around bikers on the side of the road (and wait for an appropriate time to do so) is a significant AI challenge when actually broken down. It involves everything from raw noisy sensor readings to high-level policy search.
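For a taste of the estimation side mentioned above, here's a toy 1-D Kalman filter - about the simplest instance of the filtering used in tracking and localization. The noise variances here are arbitrary assumptions, and real systems run multi-dimensional versions with motion models:

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter for a roughly constant quantity
    observed through sensor noise. q: process noise variance,
    r: measurement noise variance (both assumed, not tuned)."""
    x, p = 0.0, 1.0  # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows between readings
        k = p / (p + r)       # Kalman gain: how much to trust this reading
        x += k * (z - x)      # update: move estimate toward measurement
        p *= (1 - k)          # updated estimate is more certain
        estimates.append(x)
    return estimates
```

Feed it noisy range readings and the estimate settles near the true value while the variance shrinks - that's the cleaned-up signal the higher-level policy search reasons over.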
Sure, our eyes have great resolution and batteries-included depth perception, but they can't see 360 degrees around the car at 15 fps. Pros and cons.
I wouldn't count particle filtering, localization, object tracking, and filtering as AI. They are enablers, but not the intelligence.
Policy search is AI. =)
Eyes not only have great resolution and depth perception, but are attached to an amazing pattern-processing machine that looks forward in time to estimate the next set of perceptions. They're also very environment-invariant - sun, snow, heavy rain, fog, etc. would screw up a camera, but human eyes handle them relatively gracefully.
So in the US we have about 8 fatalities per 1 billion kilometers driven. Yes, that's 8 too many, but that is a damn lot of km driven without incident. So yeah, we do okay. Machines will do better, but we do okay.
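Back-of-envelope on what that figure means per driver (the annual mileage is an assumed round number, not a statistic):

```python
# Rough per-driver translation of "8 fatalities per 1B km driven".
fatalities_per_billion_km = 8
avg_km_per_driver_per_year = 20_000  # assumed typical annual mileage

rate_per_driver_year = fatalities_per_billion_km * avg_km_per_driver_per_year / 1e9
# 1.6e-4 per driver-year, i.e. roughly one fatality per 6,250 driver-years
driver_years_per_fatality = 1 / rate_per_driver_year  # 6250.0
```

Which is the point: horrific in aggregate, yet rare enough per trip that cities and economies run on it.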
Only counting the fatal accidents sounds like a mistake to me. I suspect that in many more instances, limb loss, paraplegia, or even just serious material and psychological damage could be avoided by highly reliable automated driving.
Absolutely - all manner of injury could be lessened or avoided by self-driving cars. And improvements made in property damage, absence from work, enforcement of traffic laws, parking infrastructure, driver training, accident forensics, and on and on.
My point was just to defend humans against the hyperbole that we all drive horribly today. We have cumulatively driven trillions of kilometers. Our cities and entire economies function because, for the most part, when you get into a car you can expect to arrive at your destination. This is no small feat. But yes, I look forward to self-driving cars vastly improving on "okay".