
To be fair, he was recently in the news saying self-driving cars are a scam. https://cleantechnica.com/2022/10/09/george-hotz-autonomous-...


>autonomous cars are no closer to reality today than they were 5 years ago

Why do people keep printing things like this, which are objectively wrong? I have had a completely autonomous Waymo come to my location, pick me up, and take me to another location.

That didn't exist five years ago, it does exist now. How is this not "closer" than it was 5 years ago when it literally exists now, and didn't exist then?


The situation you're describing is no different from 5 years ago: autonomous vehicles exist but can only operate in a limited environment. That's where Waymo was 5 years ago; it was just an even more limited environment. Read the "Road Testing" section on Wikipedia, specifically the 2017 entries.

https://en.wikipedia.org/wiki/Waymo


>it was just an even more limited environment

Huh, so are you saying maybe we're a little closer to autonomous cars than we were 5 years ago?


sure, “no closer” is hyperbole and in the most literal sense of the phrase, we are closer because time has passed… but in the practical sense, we are closer today because testing is going well and so permission has been granted to expand testing — the technology is not meaningfully different, what’s happening today could have happened 5 years ago (if safety regulations were more lax and had permitted testing with less data).


So what have the engineers working on this been up to for the last 5 years? Nothing?


There's "asking clarifying questions", then there's "intentionally missing the point to be argumentative".


Only if you think geofencing scales to solve the FSD problem. Otherwise, no.


If one believes that autonomous cars with no limitations on environment will never exist, then no progress will get us any closer to infinity.


It's different because San Francisco is not Phoenix. It's much harder.

Do you expect to wake up one day and have self driving cars work in every city? That's just not what today's technology can accomplish. You either end up with broadly applicable L2/L3 (Tesla, Comma) driving, or you get narrow scoped L4 driving.

The scope of L4 widening is a real change.


Waymo was driving in SF 5 years ago.


Waymo driving has improved a lot in 5 years. A proof of concept is different from a production service.


True, until it can off-road in the Amazon rainforest it has not improved.


The scope is larger.


I think Waymo first launched early access to nobody-behind-the-wheel hailable rides in Arizona almost exactly 5 years ago: https://www.theverge.com/2017/11/7/16615290/waymo-self-drivi...


I don't really know the state of the art in self-driving. But it might very well be that it wasn't the technology that changed in these past 5 years but the regulation around it: as incremental changes were made to regular cars, regulators came to consider that the technology was, after all, safe enough for these use cases.

Not that the technology hasn't improved, but with these things there might be many factors involved that might answer the question "why we have this today and not 5 years ago".


It exists now because you live in a place with good weather and regulators who are willing to put the lives of other road users at risk for the sake of your toy.


> regulators who are willing to put the lives of other road users at risk for the sake of your toy

What about the regulators that allow drunk and distracted drivers everywhere?


> What about the regulators that allow drunk and distracted drivers everywhere

In what places is drunk driving legal? The laws exist and are rigorously--if imperfectly--enforced everywhere I've ever lived.

Are folks building transportation businesses employing drunk drivers?


People can drive under the influence, over the speed limit, distracted or with road rage

Vs autonomous driving where none of those impediments come into play


Irrelevant, because drunk driving, speeding, and distracted driving are illegal and frequently punished. AI performs similarly to drunk drivers, yet it is an unregulated open public beta that is sold as a product.


I think the hard parts are still just as hard.


Because that's like arguing that because you can now climb your way up a tree, which you couldn't do five years ago, you're closer to climbing to the moon.

Self-driving is still incredibly limited and progress is often overstated because people make headway on some tiny issue. A thing I always like to show people who think progress is rapid: this is Germany in the 1980s, where Ernst Dickmanns had autonomous cars drive thousands of miles: https://youtu.be/_HbVWm7wdmE


What?

No, it would be like if somebody said "some day we will travel to the moon", and then after the Apollo missions there were articles being published that said "we are no closer to traveling to the moon than we were 5 years ago".

https://www.youtube.com/watch?v=AHdKm0kW4l0

This is a video of a person riding in a fully self driving car.


That is not a fully self driving car in the sense of what people think when they say "a fully self driving car". Or are you saying that I could order that car and it would drive me to New York under any conditions that a human would drive through?


I wrote the below comment, then watched the video, then deleted the comment; I say that the hard part is the human-level understanding, which doesn't exist. Watching the video, it's notably in clear, bright, flat, low-traffic, wide-road, few-people, few-parked-cars, little-going-on, ideal conditions. But why hold to my position in the comment below, if that clearly is a self-driving car, just because it isn't climbing a slippery hill at dusk past people double-parked outside a nightclub with drunk people stumbling around? If it can get to useful amounts of humanless driving in real-world conditions which were not custom-made for it, that has to count for something.

----

Eliezer Yudkowsky is fond of shitting on the AI developers of the 1960s for thinking they could write `APPLE` in the source code of a symbolic language and that that made them weeks away from a human intelligence which could reason about apples, and how simplistic that looks now.

Like YouTube's auto-transcribed subtitles: they are useful, but they are obviously transcribing sounds without understanding. They lack the understanding of where the context indicates that a spoken thing should be a name, or they will transcribe the same word two different ways in two different sentences with no understanding that it was the same object as before being referred to again. Where a sound is unclear, I can fill in what was intended but the auto-transcriber can't; where I can see from lip movement that the transcription was wrong, the audio processor can't integrate multiple inputs in that way. And they will transcribe sentences which are grammatically correct but which human background knowledge of the world tells you make no sense.

Similar with self driving cars, it's pretty clear from the outside that you can't have a car which can reason about the state of a city, its roads, the things in the roads, the environmental conditions, without having a large amount of interconnected human level background understanding of the world and the things in it. e.g. not just seeing a shape and identifying it as a cyclist, but knowing that you passed a cyclist a few seconds ago and now you are slowing down for traffic lights the cyclist will be coming back alongside you momentarily. Not just identifying a parked car, but seeing a car stop moving and turn its lights off as it parks implies the doors are about to open. Not just seeing lane markers in the road, but seeing no lane markers and being able to complete the pattern of where the lane markers should be because you understand how humans design roads. Not just seeing rain and slowing down, but the hinkiness feeling of "these conditions are dangerous" from the way other cars are driving, the road conditions, and slowing down in advance of anything objectively happening because you predict what could happen. Not just seeing a sign saying 'Diversion' but being able to look around expecting to see the next diversion route sign either down this turning or up ahead by another turning, and using that extra information to decide what to do. Not just identifying an erratically moving vehicle when you see it, but hearing a siren and seeing a flash of blue in the mirror and thinking ahead that an ambulance is coming and then looking for places to pull over to let it past and expecting the cars around you might move like that as well. Not just seeing the car in front slowing down, but seeing the driver inside it move and understanding that they are waving you past because they are double-parking to drop someone off or pick someone up instead of slowing down because of traffic. And countless other situations.

Humans have good reaction time when it comes to touching something hot and pulling our hands away before we understand and are aware of what happened. Sensor equipped cars have good reaction time when it comes to ultrasound sensing a thing up ahead and applying the brakes without understanding what's happening. Humans have bad reaction times when driving because we can't feel the thing in the road, it has to go through our slower higher level thinking to understand what's happening before we can choose to respond.

Self-driving cars, then, rest either on the pretense that you can put a human level AI on top of the car's unconscious reactions, without compromising the reaction time, to get a superhuman level driver. And that's not something you can do because human level AI doesn't exist. Or they rest on the unfounded claim that you can drive through humanspace without human understanding, which is about as convincing as saying you can send a machine to the butcher, baker and candlestick maker to do your shopping without it having any AI. As soon as anything goes off-plan the robot is stuck. And you get into "well, we'll hard-code a workaround for this situation and simply enumerate everything which could go wrong in a decision tree". Shop door closed with a sign saying "please use other door"? Hard-code that, OK now are we good? Shop door closed with a sign saying "please ring bell for attention"? OK, hard-code that, now are we good? Shop door propped open with a mop and bucket and a sign saying "caution, wet floor"? OK, hard-code that, now are we good? Butcher says "sorry we have no liver but we're expecting a delivery in 5 minutes are you OK to wait?"? OK, hard-code that, now do we have AI? And then you get to Amazon, which controls the warehouse layout, temperature, environment, shelving, can put tracks in the floor, put all items into regular sized boxes tagged with machine readable labels, which is more analogous to trains and trams on rails, and still Amazon uses humans to pick and pack things.


"I have had a completely autonomous waymo come to my location, pick me up, and take me to another location." - Waymo uses 3d mapping, limited geofencing, remote operators and mobile roadside assistance teams because those cars are not even close to any type of autonomy. Those cars are "mice" in a well designed and designated (inch mapped) maze. The car without a driver in the driver's seat is like David Copperfield flying on the stage in a cheap magic show, in front of a few hundred people who paid $50 for the tickets - see https://youtu.be/qZS9maIq_Zc


Anyone can use 3d mapping and geofencing. That's not a disqualifier.

As long as the remote operators and assistance teams are an order of magnitude smaller than putting a driver in every car, then it's close enough to autonomy to count as "closer" and to be useful.


"Anyone can use 3d mapping and geofencing" - that shows you their limitations and also doesn't qualify for the "completely autonomous" standard. Completely means anytime (regardless of weather conditions or time of day), anywhere (no geofencing), and completely adaptive behavior in the permanently and randomly changing driving conditions humans deal with while driving. Pattern recognition software alone (A.I.) would never be able to match human driving performance.

"As long as the remote operators and assistance teams are an order of magnitude smaller than putting a driver in every car" - the entire gig is way too expensive and requires "time travel" levels of scientific achievement, which is 100% fiction and 0% reality.


> doesn't qualify for "completely autonomous" standard

No, but it does qualify for "closer to reality today than they were 5 years ago"

> Pattern recognition software alone (A.I.) would never be able to match human driving performances.

That's okay. A trained human can do much better than necessary, and geofenced pattern recognition software doesn't have to be as good, especially because it should have better reaction times and braking force than a human.

> "As long as the remote operators and assistance teams are an order of magnitude smaller than putting a driver in every car" - the entire gig is way to expensive and requires "time travel" level of scientific achievements, which is 100% fiction and 0% reality.

Why?

If you can run a fleet of 300 cars with 30 people, that's already enough to make tons of money once you get well-established. You don't need any scientific improvements for that, let alone the ones you're exaggerating.
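The staffing claim above can be made concrete with a back-of-the-envelope comparison (the 300-car/30-person ratio is from the comment; the salary figures are purely hypothetical):

```python
# Back-of-the-envelope: labor cost of a remote-assisted robotaxi fleet
# versus putting a human driver in every car. All salaries hypothetical.

DRIVER_SALARY = 50_000    # assumed USD/year per full-time driver
OPERATOR_SALARY = 70_000  # assumed USD/year per remote operator

def staffing_cost(people: int, salary: int) -> int:
    """Annual labor cost for a fleet supported by `people` staff."""
    return people * salary

human_fleet = staffing_cost(300, DRIVER_SALARY)    # one driver per car
remote_fleet = staffing_cost(30, OPERATOR_SALARY)  # ten cars per operator

print(f"human-driven fleet:    ${human_fleet:,}/year")   # $15,000,000/year
print(f"remote-assisted fleet: ${remote_fleet:,}/year")  # $2,100,000/year
print(f"labor savings: {1 - remote_fleet / human_fleet:.0%}")  # 86%
```

Even with remote operators paid more than drivers, the order-of-magnitude reduction in headcount dominates the labor line under these assumptions.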


"No, but it does qualify for" - please check the statement my comment was responding to. The "1 step forward, 3 steps back" way the automation sector does R&D is not moving towards reality; it is moving towards confusing the public to justify their pitch to eventual investors.

"That's okay." - Maybe for you, but not for investors and for the market.

"Why?" - It's unsustainable, requiring resources (provided at this point by naïve investors) that commercialization can't provide. Just look at the over $100 billion wasted on this hallucination with zero actual returns. Investors expect palpable returns, not promises and delays.


> 1 step forward, 3 steps back

What are the steps back?

They're slow but they're improving. And they don't need to reach their original lofty goal.

> "Why?" - It's unsustainable

Sorry, the "Why" was directed at the level of scientific achievement you claim they need.


"What are the steps back?" - Every step forward, no matter in which direction, requires more computing power from a limited computing source, which in turn gets power from a limited power source (limited because these vehicles are mobile, not plugged into a network). By using more computing, the system prioritizes the "step forward", allocating fewer resources to other processes (other sensors or the vehicle's new electronic systems). More computing power (so that the most essential processes get better performance) requires more electricity from a solely electric vehicle with a limited battery capacity, which ultimately shortens the available battery range. And the more computing power and battery capacity you add to any vehicle, the more you increase its manufacturing or acquisition cost.
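The compute-vs-range tradeoff described above can be sketched with a quick back-of-the-envelope calculation (all numbers here are illustrative assumptions, not measured figures from any vehicle):

```python
# Sketch of how a constant compute load eats into EV range.
# All figures are hypothetical assumptions for illustration.

BATTERY_KWH = 75.0         # assumed battery pack capacity
DRIVE_WH_PER_MILE = 300.0  # assumed base driving consumption
AVG_SPEED_MPH = 30.0       # assumed average urban speed

def range_miles(compute_kw: float) -> float:
    """Estimated range after adding a constant compute/sensor load.

    Compute draw (kW) divided by speed (mph) gives the extra Wh
    consumed per mile, which is added to the base consumption."""
    extra_wh_per_mile = compute_kw * 1000.0 / AVG_SPEED_MPH
    return BATTERY_KWH * 1000.0 / (DRIVE_WH_PER_MILE + extra_wh_per_mile)

print(f"{range_miles(0):.0f} mi with no compute load")          # 250 mi
print(f"{range_miles(2):.0f} mi with a 2 kW compute stack")     # 205 mi
```

Under these assumed numbers the effect is real but modest (a few percent of range per kW of compute), so how much it matters depends heavily on the actual power draw of the sensor and compute stack.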

"the level of scientific achievement" - Every single step, every single minute and every single individual (the financial input) is prohibitively expensive for this R&D project, and it is not justified by any means by the results (the financial output). Companies and investors don't care about progress; they care about profits, and if progress stands in the path of profits, they'll fight against it. You should check Waymo salaries, hardware prices, operations costs, and fleet management costs. From an operational POV, every mile covered by those vehicles translates into a price paid by the company, money that is not recovered whatsoever at this point. Vehicle lifecycle, insurance, maintenance, cleaning and the electricity used add up very quickly and could go as high as half a billion dollars per year - "Argo has about 1,300 employees and is likely burning through at least $500 million a year, industry participants say." (https://www.theinformation.com/articles/argo-ai-planning-pub...).

Now remember that in business any investor usually expects to make 10 times his or her investment, in this case (the Argo.ai example) meaning that the profits (after all expenses and taxes are subtracted) would need to be around $5 billion per year. This is the reason why Ford decided to shut down Argo, which was burning half a billion a year with no end in sight. To directly address your statement: the scientific level needed would require way too much money to justify the road to accomplishing it. Basically, all the interested parties either do not have that money, or are part of a business model that requires substantial returns on a relatively short term and cannot afford to finance projects with constantly moving delivery dates for fictional ideas.
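The investor-return arithmetic in the comment above works out as follows (a sketch using only the figures quoted in the thread; the 10x multiple is the commenter's assumption, not audited finance):

```python
# Rough investor-return arithmetic from the Argo example above.
# Both inputs come from the thread, not from audited financials.

annual_burn = 500_000_000  # reported burn rate, USD/year
target_multiple = 10       # return multiple the commenter assumes investors expect

# Under that assumption, sustaining the burn is only justified if the
# eventual business can throw off profits on the order of burn x multiple.
required_profit = annual_burn * target_multiple

print(f"required annual profit: ${required_profit:,}")  # $5,000,000,000
```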


Does it matter? They are functional and safe enough for most sunbelt cities. We may not have FSD from day one, but what we do have is leagues ahead of what was possible 5 years ago.


What these failing companies have been doing for almost 15 years now is 1 step forward, 3 steps back, while promising they are 6 months away from the impossible. A.I. is only a statistical pattern-recognition tool that has zero capability of learning by itself from previous experience, and that shows you how any business designed around updating the constantly changing environmental data required to make the robots operate at a decent level is prohibitively expensive.


Waymo in its current form in Arizona launched about 5 years ago.


"I couldn't get this working so nobody else can either". Classic gifted kid response to failure tbh.


Self-driving car companies, not the cars, are the scam. And he has been saying the same thing for years.


This is not news. He has been saying that for at least two years now - https://reason.com/video/2020/02/24/george-hotz-fully-self-d...


"I am a charlatan and therefore this whole industry is a scam" does not have great logic to it.



