Cruise vehicle gets stuck in wet concrete while driving in San Francisco (sfgate.com)
35 points by CaffeineSqurr 9 months ago | 168 comments




A quick search on Google News shows at least 3 human drivers did the same thing in the last 3 months, so this doesn't bother me all that much.


I'm not saying this is damning evidence, but how many human drivers are on the road versus AI? A quick Google search says there are 300 Cruise vehicles operating at night currently. I am confident I could survey 10 times that many taxi drivers and not find a single one who has accidentally driven into wet concrete.


The problem with that analysis is that "driving into wet concrete" is not even in the top 1000 problems affecting road users.

It is non-fatal, for starters, which puts it in a completely separate category from the worst, idk, ~200 or so problems on roadways.

Ask those taxi drivers how many have interacted with their smartphone while driving, and you will find that 100% of them have been part of a common roadway problem that leads to fatalities.


That's the difficulty with n=1.

Was this a freak accident, or indicative of expected performance?

I guess we'll only find out if more Cruise cars turn themselves into modern art.


True, but presumably Cruise can fix this issue. Can we fix human drivers?


And over the course of a year their fleet has collectively driven somewhere around a million miles. Human drivers on US roads complete over 3 trillion miles per year.

So, at an equal rate, one Cruise vehicle driving into concrete is about equivalent to 3 million human drivers driving into concrete per year.
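The extrapolation above is a straight rate scaling. As a sketch (the mileage figures are the rough numbers quoted in this thread, not official statistics):

```python
# Scale the Cruise fleet's per-mile incident rate up to total US human mileage.
# Figures are the rough numbers quoted in this thread, not official stats.

cruise_incidents = 1                  # one car stuck in wet concrete
cruise_miles = 1_000_000              # ~1M fleet miles over the year (approximate)
us_human_miles = 3_000_000_000_000    # ~3 trillion miles driven per year in the US

# Same rate applied to human mileage: incidents * (human miles / Cruise miles)
equivalent_human_incidents = cruise_incidents * (us_human_miles / cruise_miles)
print(equivalent_human_incidents)  # 3000000.0 -> ~3 million incidents/year
```

As the reply below this comment notes, a single event is far too little data for this kind of extrapolation to be trustworthy; the arithmetic only shows what the quoted numbers imply if taken at face value.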


Discrete events and very small numbers make it hard to draw good conclusions, and you shouldn’t extrapolate like that. With the exact same logic, you could have argued yesterday that Cruise vehicles are less likely to drive into concrete than humans with an extrapolated rate of zero! This claim is just as suspect as the other — n=1 just isn’t giving you enough data.


I was simply entertaining the 'statistic' posed by the parent at face value. But you're right, we don't know whether or not this is an outlier event.


You can set an upper limit with zero events.
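This is the statistical "rule of three": with zero events observed in n independent trials, the 95% upper confidence bound on the event rate is approximately 3/n. A sketch, treating each mile driven as an independent trial (an assumption for illustration only):

```python
# Rule of three: zero events in n trials -> 95% upper bound on the rate ~ 3/n.
n_miles = 1_000_000  # e.g., a fleet that has driven 1M miles with no incidents

approx_upper = 3 / n_miles               # rule-of-three approximation
exact_upper = 1 - 0.05 ** (1 / n_miles)  # solve (1 - p)^n = 0.05 exactly

print(approx_upper)  # ~3e-06, i.e. at most ~3 incidents per million miles
print(exact_upper)   # ~2.996e-06, very close to the approximation
```

So even a spotless record only bounds the rate; it can't prove the rate is zero.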


1) Absolute counts are meaningless given the wildly disproportionate population sizes. We're talking many orders of magnitude of difference here. Cruise operates 100 vehicles during the day, 300 at night.

2) Absolute counts worldwide are meaningless when comparing against crashes in SF.

3) You picked an arbitrary period.

4) Whether something makes the news is not a reliable basis for statistical comparison. Millions of crashes go unmentioned. Many of them don't even result in a report to any officials.


"car stuck in concrete" on google image search shows a lot of human-driven carnage.


Driverless car from Cruise. Headlines every day about autonomous driving mishaps from Tesla & Cruise, seemingly never from Waymo.

Does this hurt or help Waymo? They're being painted with the same brush, but perhaps they can leverage that into regulatory capture. When the inevitable regulations come along they'll likely be set to "difficult, but achievable". Which at this point feels like "Waymo pass, Tesla & Cruise fail".


The article does mention that a Waymo car was also involved in an incident.

> In May, a driverless Waymo car blocked a fire vehicle while it was backing into a station.


The scary part of this is it went through construction cones and flaggers with stop signs to end up in the concrete. Basically, this thing went right through a construction zone without stopping.

Granted, people are this dumb every single day. But who is liable for the fines here?


YES

The problem here is dispersed responsibility so no one is responsible

The result is that no one takes responsibility for preventing blatantly dangerous events, such as blowing through cones, signs, and other obvious indicators of a construction / no-driving zone, endangering construction workers and passengers (cones also initially mark a washout/bridge-out problem). To the software developers and managers, it is just some lower-priority edge case they'll handle later. To the corporate promoters, it's just a cost to be externalized onto the public. And the regulators, not wanting to be seen as "Luddites", just approved MORE, not fewer, of these obviously-not-self-driving cars.

If a person actually had both personal responsibility (e.g., they will get fined or potentially imprisoned for something like negligent homicide if death results) and authority (e.g., they can stop deployment until the problem is truly fixed after a car mistakes a truck for the sky and decapitates the driver, or blows through a construction zone) to prevent these situations, they would not happen as much. Who should have such responsibility and authority? Start with the CEO and the board.

EDIT: stray words, tenses, added examples


Do we know that the paving zone was properly signed/controlled? The single photo in the article has a cone on the outside of the zone, but we don't see the beginning of the zone.


Even if the cones weren't perfect, a flagger should be sufficient for traffic control. It's the same as a police officer directing traffic in an intersection: both are explicit overrides of posted signs.

Same with emergency vehicles, which self-driving cars have already been shown to ignore.


Sure, but we don't know if there was a flagger either. Only a single construction worker standing to the side.

I'm not debating whether an AI car should "see" possible construction, even when not marked appropriately (most humans can too). Only pointing out we don't know if this zone was marked/flagged - and if it wasn't, there's a higher likelihood of human driver error as well.


> Rachel Gordon, a spokeswoman for the San Francisco Department of Public Works, said that the paving project on Golden Gate Avenue had been marked off with construction cones and that there were workers with flags at each end of the block.


That's from the New York Times article: https://www.nytimes.com/2023/08/17/us/driverless-car-acciden.... The linked article is currently from SFGATE. It may have changed since you made this comment.


Yes, this is an issue for self-driving cars. Roads and signage are not unchanging, and they are not always set up properly or the same way every time, everywhere; nor do they need to be.

Traffic lights at an intersection out? They'll just blink red, which is treated as a stop sign. Maybe they're off entirely, and temporary stop signs go up on the corners, or, as you mentioned, a police officer redirects traffic. And then a fire engine will run through it all, as it needs to.


Well, if a corporation is technically a person, I would say the owner of the driverless car. Assuming it is owned by a corp...


I hope fines will eventually get adjusted upward if they’re given to a company. A $10,000 fine would convince almost anyone never to plow through a construction zone, but the same fine for a company would only get them to set a quarterly target for incident count.


The owner of the car initially. The passenger(s) and maker may also be liable depending on circumstances.


Under what circumstances would the passenger be liable? They have no control over the vehicle, so I fail to see how an event like this could be their fault.


If you hire someone to perform a service for you, and during the performance of the service they cause someone harm, you can definitely be sued civilly, or even be criminally liable (to take an exaggerated example, consider murder for hire).

No idea how that would apply specifically here though.


Was fun to google image search "car drives into wet concrete." Humans are worse drivers than I thought!

https://www.google.com/search?sca_esv=557782243&sxsrf=AB5stB...


I've walked over wet concrete. It wasn't really signed very clearly (there were some cones on the corners, but that doesn't automatically mean "wet concrete"), it's really not that obvious unless you're looking for it, and I was late and in a hurry.

Looking at those pictures, I think loads of people could have made that mistake for quite a few of them, as it often doesn't seem indicated very clearly, and in some situations going through the wet concrete was the only way the car could go from a parked position.


What I'd like to know is if the concrete was marked out with cones.

If it was then clearly the car's fault, if not then the worker's fault. Cones become more critical with the proliferation of non-human drivers.


We expect human drivers to attempt to avoid accidents which are not their fault (see: legal duty to lookout/avoid), why shouldn't we do the same for non-human drivers? To not do this seems like a step backwards in safety expectations.


But human drivers have similar accidents.

The good news is they can fix the self-driving software then replicate the solution forever in millions of cars.

Now, how to prevent the small number of humans who do the same thing?


If you are saying that the car bears some responsibility for avoiding this, and should be fixed, then we agree.


We can entertain that second hypothetical once the first one's been covered, yeah?


I was going to add that cones are just as important for human drivers, but I thought it was obvious. I don't think human drivers can detect wet concrete (or other hidden hazards) any better than machines.

These sorts of safety issues (lax markings, etc.) have always been a problem, but they become more acute with non-human drivers.

I think ultimately it raises the question of whether it's reasonable to offer some accommodations to non-human drivers where they don't perform so well. Do they have to be at the standard of human drivers for everything? That's probably impossible.


Why? The entire argument for self-driving cars is that they are safer drivers than humans. If that claim breaks down whenever the driving conditions are anything other than perfect, then self-driving cars lose their only supposed benefit. If we have to substantially change everything we do on or around the road simply to help them match human efficacy, then (1) what is the point of a "self driving" car that can trivially encounter conditions that break it, and (2) if these changes would benefit safety, why haven't we already made them for human drivers?

Also the picture in the article shows both cones and workers, the presence of either would generally cause a human to alter how they drive.

What we see consistently is self driving cars that randomly stop in perfectly reasonable driving conditions, and similarly we see repeated cases of them driving through clearly unsafe areas. If a single human driver repeatedly made these same mistakes they would lose their license, but for some reason we allow it from these unsafe self driving systems. If a car manufacturer ships a car that is found to operate unsafely, the manufacturer is required to recall them, and isn't allowed to return the car to market until the problem is demonstrably fixed.


> If that claim breaks down whenever the driving conditions are anything other than perfect, then self driving cars lose their only supposed benefit

False dilemma: value from self driving cars is a sliding scale, not all or nothing.


Ah yes, it is clearly incumbent upon all other industries to modify their practices to pave over the gross deficiencies of self-driving cars.


It's fine by me if self-driving cars need help and get it, so long as the technology is beneficial. I think it is.


Ok, on what grounds is technology that performs a job that humans can generally do better at beneficial in the broad sense? That it's benefitting a handful of individuals who profit off eliminating jobs isn't in question, but where's the upside for the rest of us?


My mom (who is legally blind) wouldn't have to resort to driving herself to her doctors appointments when myself and my brother are unable to due to work.


For me it's safety (each problem solved gets applied to others, but it could be more profound than that, like chess AIs) and potential to decide on the fly whether to have something similar to a rental car or a much lower rate. For instance maybe I could know a car would be waiting for me and leave my backpack behind while I go on a hike. Plus environmental and city planning hopes.


The upside is that automobile crashes kill >30,000 people in the US alone and self-driving cars may one day bring that number down by 90%


Might they? Is that a purely faith-based assertion or is that claim grounded in anything? Because current performance suggests otherwise.


Current performance definitely does not suggest otherwise! Over Waymo's first million miles, it was involved in two collisions, one of them at-fault. The average for human drivers is 5.3 collisions per million miles.
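Taking the figures in this comment at face value (2 collisions in Waymo's first million miles, versus a quoted average of 5.3 collisions per million miles for human drivers; neither source is given here), the rate comparison works out as:

```python
# Compare per-million-mile collision rates using the figures quoted above.
waymo_collisions = 2
waymo_miles = 1_000_000
human_rate_per_million = 5.3  # quoted average for human drivers

waymo_rate_per_million = waymo_collisions / waymo_miles * 1_000_000
ratio = human_rate_per_million / waymo_rate_per_million

print(waymo_rate_per_million)  # 2.0 collisions per million miles
print(round(ratio, 2))         # 2.65, i.e. humans collide ~2.65x as often
```

With only two events, though, the Waymo rate estimate carries wide error bars, so this ratio should be read as suggestive rather than conclusive.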


Oh no you don't. Waymo describes its tech as Level 4 automation. That is an explicit statement that it underperforms human capability and has only been greenlit for -very- limited areas and conditions. Industry reports on the subject suggest Level 5 autonomy won't be possible for at least the next 12 years, which is another way of saying the industry currently has no idea how to get there.


It doesn't matter if it's only greenlit for limited areas and conditions -- just use them where they work! Don't use them where they don't!


Enormous amounts of time are wasted commuting. The passengers benefit from self driving cars, not just the auto maker.


So you're saying a handful of randos having a little spare time to catch up on Reddit during their commute is a net plus trade for putting mediocre operators on the road? That's definitely a take.


"A little spare time" is more than two hours a day for a lot of people.


That's a lot of Reddit.


If they don't end up distracting incessantly with ads


Properly marking a construction zone isn't a change in practice - it's already required. But that doesn't mean it was done properly in this case.

If a construction zone is unmarked, human drivers are more likely to drive into it as well.

Anyway, AI cars should be smart enough to "see" a potential unmarked construction zone, just as most humans can. I'm only saying construction zones should be marked explicitly for the safety of all involved.


> Cones become more critical with the proliferation of non-human drivers

Ironic. Cones are the self-driving car’s only weakness, and yet they can’t survive without them


First of all, in law there is no such term as "car's fault" or "cow's fault"; it's always the owner who is at fault.


I once saw a woman drive into a light rail construction site; her car ended up driving on the rail tracks, which were held up by poles. She apparently thought that was how she could cross the vast chasm that was the construction pit. Self-driving cars have nothing on humans in this regard.



At least going by the r/selfdriving cars subreddit, it seems like Cruise cars get into a lot more 'incidents' than Waymo cars. Especially getting randomly stuck because they don't know what to do.


One of the articles about this past weekend's Cruise traffic jams mentioned the company blamed a flaky cell phone signal as the root cause. Now why would perfect access to cell networks be required to keep the car driving... I suspect it's because Cruise is relying much more heavily on remote human drivers. It would not surprise me to find out Cruise is just a big scam and the self-driving algorithm fails so often that the cars are basically remote-piloted taxis.


fwiw, my personal experience testing both has been that Waymo's vehicles are noticeably better at driving, and also drive more aggressively. It's to the point where sitting in a Waymo vehicle is unremarkable and I don't even pay attention to its driving, while Cruise still typically has at least 2 "that wasn't great" moments per ride.


SFGate had some stats recently that align with this. Though from what I understand, there are more Cruise cars than Waymo cars on the streets.



This link just put me in a captcha loop on iOS Safari. Anyone else? Never had a problem with archive.is links.


> Never had a problem with archive.is links.

I've been having that problem with archive.is links for weeks.


Same happened to me using Brave on desktop


Same with Brave and Firefox on desktop


Worked for me on iOS/Safari


Same here, Brave on Android


Same on firefox/osx


Same on Chrome/osx


"No cipher overlap" on Firefox 102.12.0 ESR on Slackware 15 :/


How badly protected does a piece of fresh concrete have to be for this to be possible? Never heard of this happening before.


People try to drive on wet concrete all the time. It's super bad for the car, especially if you let it dry.


But why aren't there barriers etc around it? Honestly I can't recall ever seeing anything like that.


It's not 100% perfect but I still get butterflies when I see a Cruise vehicle driving around on its own.


It’s called “primal fear”


I dunno, I've been using FSD for over a year now and I still get giddy every time, like I'm watching a DARPA challenge car in the university computer lab. Some of us are just excited, not scared, by the future?


It's pretty trite to frame individuals' healthy skepticism of self-driving tech as fear of some nebulous future outcome. The tech continues to demonstrate deficiencies -right now-, in this time. All the evidence (and the industry's own reporting) suggests that parity with human drivers' capability is nowhere on the horizon, and may not be credibly achievable with current implementations. That being the case, proponents now get to explain on what grounds drivers are supposed to celebrate the introduction of additional mediocre operators to our streets.


The comment I replied to literally said "primal fear"

I'm not framing anything...

People had "primal fear" of seatbelts when they were introduced, even though the statistics, like those of AV, prove their absolute benefit in the preservation of life.


Meanwhile self driving cars get stuck in wet concrete.


Apparently they can also be captured in a ring of salt: https://laughingsquid.com/performance-artist-uses-solid-and-....


This specific scenario doesn’t seem like a damning indictment of the tech, you can absolutely see how it could happen. But on the other hand it speaks to the “death by a thousand cuts”-yness of it all. How many other weird little edge cases exist out there for drivers? Countless. You’re going to struggle to program solutions to every one.


Yeah, the infinitely long tail of exceptional circumstances means that automated driving is an AGI-complete problem. These exceptions are individually rare, but once there are many driverless vehicles on the road they'll come up more often. Vehicles will need to learn and problem-solve to keep up.


Thousands of exceptional circumstances, but once they're added to the training data, each one is solved forever.


You say that with the air of confidence of someone that believes their test cases cover all possible circumstances. I know for a fact that none of the large, complex real world systems I've worked on have ever had complete test coverage.


Only if they are discretely identifiable, in a wide range of conditions, in a short enough time, and do not conflict with other circumstances...


> On Friday night, as many as 10 Cruise driverless cars stopped working near a music festival in San Francisco’s North Beach, causing traffic to back up, according to The San Francisco Chronicle, which reported that the company had blamed “wireless connectivity issues.”

They just stop if they lose service?


It's definitely safer than going on blind. Wet concrete might be the least bad situation.


I mean, that's a good failsafe.


And if they lose connection on a foggy morning on a busy street and cause a pile up?

I agree that "stopping" should happen. But, they need to stop safely (pulled onto the shoulder/parking lane/low volume side road), not just abruptly stop in the middle of the street, blocking the rest of the traffic.[1]

1 - https://www.wsmv.com/2023/08/15/driverless-cars-stall-causin...


It's a good failsafe if there's a human driver there to take over. I don't think they should be on the road unsupervised if they can't operate without connectivity.


At a bare minimum, the onboard systems should know how to "pull over" like a human driver would if their vehicle was malfunctioning.


It seems like Cruise is speedrunning all the mistakes that human drivers make.

The question is whether they learn from it.


From reading news stories like this, it seems like roadworks in some places in the USA aren't cordoned off as clearly as they are in the UK. I'm not sure if this is an accurate assessment or not. But unless the car drove through a cone, maybe it wasn't cordoned off sufficiently?


This must've been proposed, but why don't we ticket Cruise as a single "driver"? The number of infractions they've had would certainly get their license suspended.


Reminds me a bit of the Onion clip "High Unemployment Rate Linked to One Man with 42,000 Jobs"

https://www.theonion.com/high-unemployment-rate-linked-to-on...

EDIT: or, more appropriately, the case of Prawo Jazdy: an alleged Polish national with hundreds of speeding tickets (Prawo Jazdy is Polish for driving licence)

http://news.bbc.co.uk/1/hi/northern_ireland/7899171.stm


It depends on your goal. Does aggressively ticketing new teenage drivers lead to better driving?

There is a huge potential benefit. If the cars are going to improve they need to be tested in the real world. So maybe some forbearance is justified?


Ticketing and suspension for repeat offenses is an appropriate response to a single teenage driver or single self driving system. Cruise needs to spend more time in simulation. They are at least a few years from being ready for the road.


This is good, this is how the AI learns and gets better.

Just like a learner driver who stalls the engine, forgets to put on a turn signal and can't keep up with traffic.

We've all got to learn sometime.


Plus there is an opportunity for all the cars to learn the same lesson at once and forever.

Also humans can learn from this too. It’s not hard to imagine a new type of safety cone that has an explicit sign for autonomous vehicles to stop far far away.


>It’s not hard to imagine a new type of safety cone that has an explicit sign for autonomous vehicles to stop far far away.

Two things:

The promise of self-driving cars is that they can operate themselves in an environment not specifically designed for them. If that promise is broken, and it's the position of the industry that things like traffic signals, construction, emergency services, road markings etc need to be modified for functionality with self-driving cars then I think public will to accommodate them dries up quickly.

That being said, I can imagine having a lot of fun with a physical token that can remotely stop an autonomous car.


> It’s not hard to imagine a new type of safety cone that has an explicit sign for autonomous vehicles to stop far far away.

Car companies will never build in failsafes like that, it would be too easy for anyone to put up those signs anywhere. Cars and streets are made for human drivers, there is no way around it and no alternative to making your car AI as good as human drivers. If you can't do that, you can make a much simpler train "AI" and put it on tracks. We did that in the 60s.


> It’s not hard to imagine a new type of safety cone that has an explicit sign for autonomous vehicles to stop far far away.

Isn't that kind of defeating the point of autonomous vehicles if we need to build a world that will accommodate them?


The point of autonomous vehicles is to not have to totally reshape the world before being able to accommodate them. But small shifts like adjusting signs to also be easily read by autonomous vehicles aren't that crazy.

It'd just be done such that new signage is made to be easily readable, so that over time, as old stuff is replaced, the signage becomes more readable for machines.

This sort of thing can also benefit regular vehicles.


It's corralling in the possible range of the vehicles - anywhere without the enhanced signage will have to be no man's land. If an autonomous car drives to a small town that doesn't have the enhanced cones and gets into an accident at a construction zone whose fault is it?

I'm not sure what "This sort of thing can also benefit regular vehicles." means in this instance. If the cone is machine readable how does that help me?


Enhanced signage doesn't mean that the cars shouldn't be able to read regular signage, it just means that future usages of signage will make the task easier and lower risk. As such, locations without the signage aren't "no man's land". Would be similar to how many places aren't outright banning ICE cars, just banning the sale of new ones at some point in the future. Allowing for a comparatively smooth transition.

The cones being machine readable would allow regular vehicles to also be given the ability to read them more reliably (and potentially with less equipment than needed by a current autonomous vehicle), so, they could do things like warn the driver, which could be valuable in reducing the cases of distracted drivers slamming into closed off lanes.


> Would be similar to how many places aren't outright banning ICE cars, just banning the sale of new ones at some point in the future. Allowing for a comparatively smooth transition.

This comparison doesn't make much sense. We're highlighting a dangerous failure in capability here, not a desire to change out the engine. Either engine works perfectly fine, it's secondary properties that we care about with transition. It's a primary responsibility of an autonomous vehicle to be able to identify and react to obstacles.

We either need the enhanced cones or we don't, but only a fraction of cones being enhanced means they aren't actually that useful or that the situations without them are inherently more dangerous. Are we OK with a situation where autonomous vehicles identify the enhanced cones 90% of the time but the non-enhanced 50%? No, we want both to be very high.


>We either need the enhanced cones or we don't, but only a fraction of cones being enhanced means they aren't actually that useful or that the situations without them are inherently more dangerous. Are we OK with a situation where autonomous vehicles identify the enhanced cones 90% of the time but the non-enhanced 50%? No, we want both to be very high.

I agree that we should strive to improve the vehicles' ability to detect even unenhanced cones. Doesn't mean that we can't also aim to gradually improve detection accuracy further with enhanced cones. That is, if autonomous vehicles identify regular cones correctly 98% of the time, but enhanced cones can be identified 99.99% of the time, we should be okay with a gradual transition to the latter, even though tautologically it means that the former is less safe.

We already do this with all sorts of things, safety standards improvements often have at least a grace period during which the comparatively less safe things are still allowed to exist and operate alongside the safer ones.


>Plus there is an opportunity for all the cars to learn the same lesson at once and forever.

I doubt that; every car maker will try to invent their own models.


Only if the incident was avoidable, which isn't clear from the article/photo.


> We've all got to learn sometime.

The lesson should be that San Francisco isn't the right place to run a highly disruptive experiment.

Do these experiments where the potential for disruption is a lot lower, like a smaller city with less congestion.


How many learner drivers do you think hit the streets of San Francisco every week?

Should they be banished too?


People are not machines. Machines are not people.

But I'll take your bait:

I grew up close to a city (Worcester, MA) with plenty of weirdo streets like San Francisco, and had plenty of teenage friends who lived in the city or travelled through it.

Needless to say, we didn't get behind the wheel for the first time in Worcester. We stayed in calmer streets until we were ready. When we drove in Worcester with learner's permits, we had adult supervision. Afterwards, we were only allowed to drive without adult supervision once we passed the driver's test. (I should also point out that, at the time, Worcester had a very serious problem with unlicensed drivers.)

Clearly, Cruise's vehicles aren't ready to drive in San Francisco without adult supervision.

But let's get back to the thesis: It is highly inappropriate to equate a machine with a human. Machines have no rights whatsoever. The "We've all got to learn sometime" attitude doesn't apply to this discussion, because an autonomous car is not a human.

What instead applies to this argument is conventional engineering and business development: Where is it appropriate to run Cruise's early marketing experiments, where bugs (and other unanticipated behavior) are expected? What are the appropriate limitations to put on vehicles when they have no driver? For example, what if the first use of the vehicles was a private resort (like Disney World), where a malfunctioning vehicle is easier to absorb because there are very few passenger vehicles?


> When we drove in Worcester with learner's permits, we had adult supervision. Afterwards, we were only allowed to drive without adult supervision once we passed the driver's test.

Am I to assume you never made a single mistake while doing so?

Of course not. Learner drivers stall, they forget turn signals, they don't keep up well with the flow of traffic. That is fine, that's all part of learning and perfectly normal.

> Clearly, Cruise's vehicles aren't ready to drive in San Francisco without adult supervision.

I don't think that is clear at all. Sure, they're making mistakes, but so did you and I and everyone else that ever learned to drive.

> But let's get back to the thesis: It is highly inappropriate to equate a machine with a human. Machines have no rights whatsoever. The "We've all got to learn sometime" attitude doesn't apply to this discussion, because an autonomous car is not a human.

I don't think that has anything to do with it. Driving a car is not a "right" that all humans have, it's a privilege that can be quickly taken away.

Everyone is afforded the opportunity to learn at some point, and we need self-driving vehicles to have that opportunity too. Fast forward 20 (or however many) years to when self-driving vehicles are much, much safer: they mean long-haul truckers don't have to be away from home in an unsafe and unhealthy profession, they mean fewer vehicles on the road, etc. etc.

Society NEEDS those benefits, and if we never give the self-driving vehicles the opportunity to learn, we'll never get there.

> What instead applies to this argument is conventional engineering and business development: Where is it appropriate to run Cruise's early marketing experiments, where bugs (and other unanticipated behavior) are expected? What are the appropriate limitations to put on vehicles when they have no driver? For example, what if the first use of the vehicles was a private resort (like Disney World), where a malfunctioning vehicle is easier to absorb because there are very few passenger vehicles?

Absolutely, those are very important discussions to have. They are being had, by people in a position to make those decisions. Like a million things in our society, if you don't like the decisions they're making, you need to get yourself into that position.


Again, the whole thesis of your argument is that you're equating robotic cars with humans. Your statements are irrelevant because robotic cars are not humans.

For example: Yes I did stall, but I did not block traffic for 20 minutes. I started the car and moved the shifter from 3rd to 1st. I did not require outside intervention to move my car.

I am also a person with fundamental human rights. A self-driving car has no rights, and deserves no empathy.

Which is why I'm trying to refocus the discussion back to "tech."

For example, when we talk about disruptive tech: The early customers need to be willing to put up with the "faults." In this case, San Francisco's emergency response departments aren't willing to put up with these faults.

Likewise, when we talk about choosing an early market: that means making sure the market is well-chosen to suit the capabilities of the tech. One of the issues in San Francisco is that demand for self-driving cars outstrips the capabilities at this point. IE, it's better to choose a place where Cruise can meet the demand.


> I am also a person with fundamental human rights. A self-driving car has no rights, and deserves no empathy.

Are you saying that driving a vehicle is a fundamental human right?

Because it is absolutely 100% not.

> In this case, San Francisco's emergency response departments aren't willing to put up with these faults.

What are you talking about? San Francisco's emergency response departments absolutely ARE putting up with these faults, and the regulators and people who make decisions about if they should or should not be on those streets are deciding they should be.

I understand that self-driving cars can be inconvenient right now, but that always happens when you're aiming for improvement or progress. When lanes get added to a road the traffic suffers during construction. When you renovate your kitchen it's painful to live in during the work, etc. etc. Just because it's inconvenient doesn't mean you shouldn't do it - the eventual improvement will be worth it!


I think you've missed my point: San Francisco is a poor choice for the earliest market of Cruise.

This argument comes from my interpretation of two books: "Crossing the Chasm" and "The Innovator's Dilemma."

Cruise can choose any market they want: any city, any municipality, any closed road network like a resort. I keep arguing that San Francisco is the wrong choice right now.

Choosing an early market is an important step in developing a technology business. It doesn't matter how wonderful your technology is, your company needs to succeed in its first market in order to move to another market and then to larger markets. Furthermore, your customers must be willing to put up with the bugs and shortcomings of your product compared to existing technology.

If a company overestimates the maturity of their product, they can lose the goodwill of the future customers who don't want to put up with bugs or other shortcomings.

I think the big problem that Cruise is facing is that they will be regulated out of existence before they fully debug their product. (For the sake of argument,) if they had used resorts as their first customer, they wouldn't have to worry about being regulated out of existence, because they would be dealing with a private landowner instead of a government; and they would have had a much easier environment in which to debug the product.


> I think you've missed my point: San Francisco is a poor choice for the earliest market of Cruise.

There are plenty of regulators and people whose entire job is to figure out if it makes sense in their city or not. Those people are in charge of making that decision for San Francisco, not you.


From yesterday:

DMV tells Cruise to reduce its driverless vehicle fleet in SF by 50%: https://www.sfgate.com/tech/article/firetruck-siren-lights-o...

This proves my point that San Francisco is a horrible place to test-market driverless cars.

> In a statement released Friday night, the DMV said it is investigating “recent concerning incidents” involving Cruise's vehicles in San Francisco. In addition to halving the number of vehicles in the city, the DMV called for Cruise, a subsidiary of General Motors, to have no more than 50 vehicles operating during the day and 150 vehicles driving at night until the agency's investigation is complete.

> As SFGATE previously reported at the time of the Aug. 7 meeting, which featured a slide deck on autonomous vehicle performance in the city, “In 2023, the department logged about 50 incidents involving AVs that nearly crashed into personnel, obstructed travel or blocked stations, per the presentation — and five more reports were written up over the weekend.”

> Nicholson provided a statement in response to Chiu’s move, sent to SFGATE on Thursday by the city attorney’s office. “We do not believe the industry has any incentive to remain at the table and solve their problems,” she said. “These incidents with Public Safety are not going away and are in fact increasing.”


I think you have a very poor comprehension of the issue, and of how to discuss opinions in an online forum.

As a city, San Francisco has a lot of citizen initiated ballot questions. (When I lived in the city they had an initiative to rename the sewage treatment plant after George W. Bush.)

I wouldn't be surprised if an upcoming ballot has an initiative to ban self-driving cars.


Sidenote: I'm glad the title explicitly says "Cruise vehicle" and not "Self driving car". Cruise and Waymo are very different.


But if you look at the statistics of human-driven cars getting stuck in wet concrete this is actually 34.7% better /s


Human abilities are typically gonna fall into a normal distribution. There’s definitely an opportunity for driverless cars to have a much thinner dangerous end of the tail. You can’t fix stupid, but you can limit an algorithm…


I always wonder what actual native SF residents think vs what people in other cities without AI cars think.


This might be a naive question, but are driverless lanes being considered? For instance we have bike lanes all over the US. It might make sense to make a similar investment for AI-friendly lanes to help transition to driverless transportation.


It'll be a dark, sick twist that driverless cars will get their own protected lanes in many American cities before bikes do.


Counterpoint: traffic congestion is almost entirely due to individual driver differences. An automated lane would be vastly more efficient and could potentially lead to fewer lanes being needed for vehicles altogether. This is assuming that most cars would be automated, so I'm talking about a more distant future, but I see that as more realistic than convincing America to abandon its car addiction.


Making it easier to take a car places has always caused an increase in driving. I don't see a reason that this would change with increases in automation. It's an additive solution (like feature bloat, for example) when we need subtractive solutions (better products).


Isn't San Francisco very hilly? That's pretty bad for biking, it's a waste to reserve lanes for bikes if nobody will use them.


Meh they're not really bad for biking, and you can also just walk your bike up the hill, take a street car, or since you are probably using a bike with battery assist you can just ride up the hill.

There are plenty of options that negate this as a concern (actually it's not even a concern), especially when the alternative is big government autonomous vehicle welfare shoving ads up your ass and nickel-and-diming every move you make to participate in society.

Want to protest something? Sorry the government shut down your ride. Whoops you were mistaken for a terrorist - straight to jail! Did you forget to pay your Waymo bill? Guess you're stuck at home until you get the funds to pay it. Heading to the airport? Sorry sir you can't exit the car while the car is in motion, 40 minutes ETA until you arrive at your destination. But I can walk, it's right there! Sorry sir for your safety and those around you please enjoy this new Netflix special while you wait. You can purchase this bottle of condensed milk for $7.99 and it'll automatically be deducted from your paycheck.

Car-only infrastructure isn't just a really stupid and expensive way of doing things, it also keeps you under control.


Why though? We have plenty of bike lanes in my city and I see hundreds of cars for every bike I see. Usually those bike riders are either kids who ride on sidewalks or dudes in spandex who don’t seem to be going anywhere.


> This might be a naive question, but are driverless lanes being considered? For instance we have bike lanes all over the US. It might make sense to make a similar investment for AI-friendly lanes to help transition to driverless transportation.

That idea doesn't make sense. Where would the space for the dedicated lanes come from, especially on city streets (like the one in the OP)? The existing traffic could use those lanes more effectively, and it would be wrong to waste them just because driverless cars suck and their developers insist on shoving them out the door anyway.


They're called train tracks.


Well I mean yeah, one of the benefits of trains is how efficient the track system is. So in a sense yes I'm talking about extending the notion of a train track and applying it to commonly used roads. If we get to a point where the majority of vehicles are capable of being driverless, then an automated lane would be much more efficient and would probably reduce traffic congestion.


Doing something like that takes massive public investments in changing our roads and infrastructure. Given that this money will come from the public, I would argue that it should be spent on publicly accessible tools and services, like enhanced public transit. I do not want to spend public money to build public infrastructure for the gain of massive private companies' experiments.


I mean, maybe? But it could be something as simple as (a) pick a designated color like purple or something, (b) paint a purple line in that lane, (c) make it a law that you can't operate the vehicle yourself while you're in that lane.

I don't mean doing this today. I'm talking about a future where most vehicles have a driverless capability. The reason why it would be useful is that a lane of automated cars would be safer and more efficient. But the reality we have right now is that driverless cars are intermixed with human-driven cars, which will always be a danger.


Disregard the price of your purple line, for I, a taxpayer, paid for the road you wish to paint. My car registration fees paid for it too. Don't dare assume it's OK to prevent me from driving on this public infrastructure. A couple of years ago, politicians reduced a central thoroughfare through our city from two lanes to one, where a dedicated lane now exists for buses. This was profoundly unpopular and has increased congestion. Which of the one remaining lanes would you like to paint purple now?


Expand your horizon farther into the future. Imagine a world where the majority of vehicles have driverless capabilities. If you were to reserve a lane for automated traffic, it would significantly improve congestion and could lead to fewer lanes being needed at all.


I don't think it would make sense. Dedicated lanes still need to close or have construction sometimes, and you still have to get out of the dedicated lane at some point and drive to your final destination. It wouldn't solve any problem, it would just make drivers even more complacent and unlikely to act when their car gets out of the dedicated lane and crashes into something.

Bike lanes make sense because bikes can't go as fast as cars and are unprotected in the event of a collision. Bus lanes make sense because buses have dedicated predictable routes, and efficient bus lanes increase the number of people that can get around.


There are bike lanes all over the US? Since when?

Also, isn't Musk setting up Hyperloop, or was that another publicity stunt?


Since the past decade or so, but especially the past couple of years: https://nextcity.org/urbanist-news/how-five-u.s.-cities-buil...

Re. Hyperloop: some have said it was a stunt to attack public support for CA high speed rail (which would reduce CA's dependency on cars): https://twitter.com/parismarx/status/1571628269555826688?lan...

EDIT: Re-reading your comment, I realize that you're challenging GP's assertion that we already have bike lanes everywhere, which I agree is obviously false.


67 miles per city.

For reference, two of the cities mentioned are Austin with ~2500 miles of roads and Denver with 1900 miles.

It's a start. But that's all it is.


Why would we want to spend so much money to transition to driverless transportation? Just take PT if you don't want to drive.


Not OP, but my issue with public transport is mostly the "public" part. At least on the DC metro, it's not uncommon to see people getting harassed and MPD doing nothing about it. I've personally seen a drunk man pissing in the corner and watching it get carried by momentum through half the car. Sure, it's probably like 1 in 50 trips that you see something bad happening... but it's still enough for me to prefer personal transportation.

The other half is the stuff you can't do, like carry large items or eat on the train. Just got some large boards for a DIY project and stopped by In-N-Out. ~10 minutes there and back. It would have taken an hour with PT, and I couldn't have carried the materials or stopped for food. With a driverless car, I could still do that.

I'm not suggesting any money go towards paying for driver-less lanes atm though. Still feels way too early for that.


The answer to those problems is more public transit (in the US, it's mostly working class - if we make it normal across the socio-economic spectrum, people will demand more of it), more bike lanes, and safer spaces for pedestrians.

If it took you ~10 minutes to get to the store, it's probably ~2 miles by car. Maybe a bit further. That's easily done on a bicycle (not for large boards, but easily doable for moderate grocery runs or other regular errands). Or, a bus/tram - might take a few minutes more waiting for it to arrive at each end, but still not an hour (if it runs regularly enough, which it obviously doesn't).


>If it took you ~10 minutes to get to the store, it's probably ~2 miles by car

I hate to be the bearer of bad news, but it's closer to 7. The speed limit is 70 mph for most of the drive, haha.

Even if it was 2 miles, that's gonna be close to 30 min each way, an hour spent just on commuting for groceries. I don't think I could justify it tbh, and I like biking enough. I'm usually just carrying stuff though, if I'm leaving the house to go shopping.

If it was going to/from the exact same place, perhaps. It's just so convenient to grab the keys and not have to worry about schedules or arranging how you'll carry everything back.

Perhaps if I lived in a city and it was my only choice though.


I fail to see how more public transit would solve the drunk-guy-pissing-in-the-corner issue.


If public transit is normalized (used by the majority of people, particularly those with money and some political power), those people will demand enforcement of "don't piss in the corner" laws. The problem today is that mass transit (outside NYC and a few other places) is largely used by the working class, who have neither the means, the power, nor the time to push for change.


It's not bad reasoning, though I just want to point out that NYC has tons of public transit, used by both working class and non-working-class people, and those problems persist.


Well, that stinks. I haven't been to NYC since I was a kid.

How do other countries solve this? Is it just an off-hours thing, or a problem mid-day?


The DC metro is also used by lots of "elite" types.

The problem is nobody does anything because nobody else cares enough, tbh.

Nobody cares enough to call the police because they know the police won't do anything. The police won't do anything because they'll arrest people and those people will be let out a few days later with no punishment. Why? Ask the elite representatives in DC I guess. They probably just don't care to solve it or don't want to solve it. There was a dude who vandalized a store and stole some stuff in DC, got arrested, let out 2 weeks later and literally 2 days after being released did the same exact thing.

I've unfortunately seen creeps doing awful things in front of innocent people too, and it's one of those things that nobody will spend time tracking down, I guess.


I'm all for public transport, and I don't think this will happen but...

Because driverless cars are safer, faster and more efficient.


...than what, regular cars, or public transit?

Good public transit is WAY safer and more efficient than cars of any sort. It could be faster, too, if we prioritized it over cars like much of Europe has.


> more efficient.

Beg your pardon? How is 2.5 tons of steel to carry a handful of people more efficient than a tram?


Private trams? Sounds worse than private cars to me.

Public trams? I already said I'd support that, but it's not on the menu, is it.


We absolutely do not have bike lanes all over the US, most cities are barely getting started adding them


I didn't say we had reached a saturation point, I'm merely pointing out that we've mustered up the investment to build them.


I'm not trying to argue, it's just that the claim "bike lanes all over the US" is not remotely true no matter how you slice it


Glad they were able to get Paul Harvey's take on this


this is the kind of human interest story i log into hn for each morning


Still safer than human drivers, who do this all the time


Uh, I know hundreds of drivers, and I've never met someone who's done this. Who are you hanging around?



If you want to play that game, now we get to compare the number of human operator hours per wet-concrete incident vs self-driving cars. A quick Google search suggests the estimated number of human drivers on the road today is 1.4 billion. The rest of that calculation is left as an exercise.
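As a sketch of the exposure math, using rough figures floated upthread (~3 trillion human-driven miles per year in the US vs. roughly 1 million driverless miles for Cruise's fleet -- both approximations, not audited numbers):

```python
# Rough exposure comparison: how many human-driven miles correspond to
# one Cruise-driven mile, at the approximate figures cited in this thread.
human_miles = 3_000_000_000_000   # ~3 trillion US vehicle miles/year (approx.)
cruise_miles = 1_000_000          # ~1 million Cruise driverless miles (approx.)

exposure_ratio = human_miles // cruise_miles
print(f"Exposure ratio: about {exposure_ratio:,} to 1")
```

So at equal per-mile incident rates, one Cruise mishap would correspond to about 3 million human mishaps -- which is why a single event tells you very little either way.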


If you do a true apples-to-apples comparison between Waymo and humans, the waymo cars are about five times safer.

> Accidents per million miles driven for Waymo self-driving cars are 0.59, compared to the general U.S. rate of 2.98

https://blog.gitnux.com/self-driving-cars-safety-statistics/
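Back-of-envelope, using just the two per-million-mile rates quoted above (taken at face value from the linked post, not independently verified):

```python
# Ratio of the two accident rates quoted above (accidents per million miles).
waymo_rate = 0.59   # Waymo, per the linked post
human_rate = 2.98   # general US average, per the linked post

ratio = human_rate / waymo_rate
print(f"Human drivers crash roughly {ratio:.1f}x as often per mile")  # ~5.1x
```

That's where the "about five times safer" figure comes from, with the usual caveat that the underlying mile mixes (roads, weather, time of day) differ.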


Pfft. How many of Waymo's million miles were done on unfamiliar roadways or in inclement weather? Does it snow much in Phoenix?


The top 5 results of this search are driverless car examples for me...


I wonder if this had anything to do with them being desensitized to cones because of the cones on the hoods...



