
I like to think that students are well advised to spend their money elsewhere: namely, to pay back any student loans, to save up for a home loan deposit, etcetera.

There are other (cheaper) ways to build a professional network: local meetups, hackathons and whatnot. The best advice I could give graduates is to focus all of their energy on getting ahead in their career, i.e. in their job at their employer. This will pay off far better than any IEEE membership could.


Does Google actually consider Go to be a success? I get the impression that it failed in what it set out to be: a successor to C/C++. Or, put differently, Rust has eaten Go's lunch.


Go was never intended to be a C/C++ successor; it was intended to solve a class of problems that Pike and gang experienced in the software they worked on, which is to say network servers. What you are probably remembering is that they assumed, from their unique Google lens, that it was C++ developers who had those same problems and would see Go as an alternative to C++ when faced with them. But it turned out that in the "real" world, people were writing network servers in Python and Ruby, not C++.

Which isn't surprising to the rest of us. If you remember the days before Go, every second thread on HN was about how "company X", including the company now known as X, saw their Ruby network servers completely falling down under load, forcing a rewrite in whatever the language du jour was that day. But Googlers tend to live in a completely different bubble with respect to the way software is written.


Well, a lot of servers at Google are (or were) written in Java, not C++. As a GC'd language, Go arguably competed with Java internally more so than with C++. One of Go's flagship projects (Kubernetes) started out being written in Java, for example.


If I remember the legend correctly, the idea for Go was initially conceived while waiting for a C++ program to compile. Not sure about any of the other stuff, but they definitely got the compile times right...


A goal of Go was to put working on complex distributed systems within the reach of the junior people Google had access to in the quantity they were hiring. To wit, the kind of people who would have been able to work on a big Python system with 3 months of ramp-up, or on a big C++ system with a year of ramp-up.

It is pretty clear that with respect to that goal, Go is a success. It has attracted Python programmers who need type safety and performance. Someone with no Go experience could land a useful new feature in a big Go program in 3 months.

Introducing a junior person to a large Rust system would still take a year, because it is so much more difficult than Go. Which means to me that if Rust had been aiming at this same adoption goal (it wasn't) it would not have succeeded where Go did.


> "Introducing a junior person to a large Rust system would still take a year, because it is so much more difficult than Go."

A recent study done at Google disagrees with this assessment.

""it takes about the same sized team about the same time to build it, so that's no loss of productivity",

said Google's Director of Engineering Lars Bergstrom about porting Go to Rust in the talk https://youtu.be/6mZRWFQRvmw?t=27012


His direct quote contradicts your assertion:

"When we have rewritten from Go into Rust, we found that it takes the same size of team and same time to build it."

Important part here being: rewrite. I would expect a rewrite to take less time than writing from scratch, not the same time. Yet the Rust rewrite took as long as the original from-scratch Go project.

So to me, this implies the opposite, that Rust takes longer to write.


Time to build a new system and time to onboard a new team member without professional experience in a given language are two very different things.

Go is much more optimised than Rust for quick onboarding, fast feedback, and "code look" consistency across projects.

Now, a team that knows both Rust and Go well might have the same productivity in either (maybe even more in Rust), but with lots of changes in personnel, specifically in quickly growing departments, Go can make a huge difference.

This is obviously just an anecdote, but I've seen more companies or departments running a mostly-Go backend stack with job postings saying "no Go experience required" than equivalent companies (or departments) focused on any other language.


> Introducing a junior person to a large Rust system would still take a year, because it is so much more difficult than Go.

Do you really think large Golang codebases are so easy to survey? I could see the argument wrt. C++, but Rust actually has a great feature set for programming "in the large". Take a look at the k8s codebase for an example of a large project that's written in Golang; it doesn't seem all that easy to get started with.


This is an interesting question, but I doubt you will get a satisfactory answer here and probably anywhere. There is probably not even a uniform opinion about this within Google.

What I am more curious about is how much actual use Go has within companies and especially Google. What is the percentage of Go within their monorepo? How much of Google search is powered by Go?


I guess it depends on how you define success, but given that a lot of people have been writing a lot of Go code inside and outside of Google over the last 15 years, it seems likely it's doing a decent job at solving problems for those people. I'd call that a success.


It was not supposed to /universally/ replace C++.

It was supposed to replace C++ in projects where the developers would’ve otherwise reached for C++ simply because it was the default language choice.


One guy who did ML at Google told me that Google indeed tried to use Go for ML problems, then realized it was too slow and went back to C++ there.


Is this necessarily correlated? I would think that the potential to cause disaster is rather linked to whether AI is given the authority to enact decisions autonomously, i.e. given access to critical systems.


If there's no benefit, AI's net cost is more than that of traditional systems. No benefits, no access.


Are there any independent statistics for how much AI-based coding assistants like GitHub Copilot improve an average software developer's productivity?

I had hoped for something around 5%, which is not massive, but still a non-trivial number.

Not sure how AI assistants fare in other "high value" professions, say legal, healthcare, finance, civil engineering, or business consulting, to name a few. Does anyone have reliable figures?


It's in maintenance mode. They may still support it for legacy Chrome apps (Chrome OS) and Manifest V2 browser extensions. For the "drive by" open web, they had an origin trial for a while that you could register for and which would throw your PNaCl apps a "lifeline".

But for all intents and purposes, WebAssembly is now on par with (and beyond) what Portable Native Client supported. Some things were easier in PNaCl; threading support comes to mind.


Java Web Start always seemed like a lazy approach to bringing legacy desktop apps into the web. Something you would associate with enterprise software, where end-user experience apparently does not matter. I remember lengthy downloads of JAR files, after which, eventually, some desktop window using very 1990s-looking Swing widgets would pop up. Not to mention that you needed to download a JRE (I don't remember if this was automatic on first launch).

Even in the heyday of Java, desktop applications were never all that great. I remember how painfully slow Eclipse ran. And when I tried the WebStorm IDE a few years back, all the same problems still existed re memory management, plus pronounced latency between pressing a key and having the character appear on screen. WebStorm was also infamous for its code indexing runs, which pretty much froze my computer for minutes at a time. Not sure if that's a Java problem though.

People criticise Electron, but honestly: applications like Slack, Spotify and VS Code run much better than the Java-based desktop apps that I got to use.


Some JavaScript engines used to (or maybe still do) detect asm.js-compliant code and then AOT-compile it into machine instructions. You could theoretically write asm.js code by hand, but that would have been error-prone and laborious. For all intents and purposes, one would compile a native code base into asm.js using the likes of Emscripten.
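
For the curious, a hand-written asm.js module would have looked something like this (a minimal sketch with a made-up AsmAdd module; the |0 coercions are what signalled "this is a 32-bit integer" to the engine):

    function AsmAdd(stdlib, foreign, heap) {
      "use asm";
      // a|0 coerces a to a 32-bit integer; these annotations are
      // what let engines validate and AOT-compile the function.
      function add(a, b) {
        a = a | 0;
        b = b | 0;
        return (a + b) | 0;
      }
      return { add: add };
    }

    var add = AsmAdd(window, null, new ArrayBuffer(0x10000)).add;
    add(2, 3); // 5, compiled to straight integer machine code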

There used to be a SIMD.js proposal, which was AFAIK abandoned in favor of WebAssembly SIMD. Google's proprietary Native Client also added limited SIMD support as its last hurrah.

I personally think the way to go is to improve the interoperability between JavaScript and WebAssembly instead of continuing to pursue low-level patterns like asm.js and SIMD in JavaScript. And let's face it: for any larger code base, JavaScript is a mere deployment format, not the language you code your project in. I am still hoping for a future version of TypeScript that can compile into WebAssembly.
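
To be concrete about what that interop looks like from the JS side today, a minimal sketch (assuming a hypothetical module.wasm that exports an add(i32, i32) function):

    // Fetch, instantiate and call a Wasm export from JavaScript.
    const bytes = await fetch('module.wasm').then(r => r.arrayBuffer());
    const { instance } = await WebAssembly.instantiate(bytes);
    console.log(instance.exports.add(2, 3)); // arguments cross the boundary as numbers

The pain point is everything that isn't a number: strings, objects and arrays still have to be copied or marshalled across the boundary by hand (or by generated glue code), which is exactly where better interop would help.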


They don't optimize asm.js anymore. But yes, you said what I didn't feel like typing out, except I think we should return to high-performance JS.


It is not entirely clear from the article, but apparently they still use Java for their calculation engine and then transpile it into JavaScript. Which makes me wonder whether, instead of porting this code base to WasmGC, a partial rewrite would have helped the project's maintainability in the long run. Rust seems like a potentially good candidate due to its existing WebAssembly backend.

WasmGC is useful for many other projects, of course. But I wonder how painful it is to maintain a Java code base, which is not a great choice for client-side web apps to begin with. I remember using GWT back in the day, and that never felt like a good fit. GWT supported a whitelisted subset of the Java standard library, but the emitted JavaScript code was nigh on impossible to read. I don't remember if Chrome's developer tools already had source map support in those days, but I doubt it. Other core Java concepts like class loaders are equally unsuited to JavaScript. Not to mention that primitive data types are different in Java and JavaScript. The same is true for collections, where many popular Java classes do not have direct counterparts in JavaScript.
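
To make the primitive type mismatch concrete, here is a quick sketch you can paste into a browser console. Java's long is an exact 64-bit integer, whereas JavaScript's Number is a double that only represents integers exactly up to 2^53 - 1, so a transpiler has to emulate long (as GWT did) or accept silent precision loss:

    console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1
    console.log(9007199254740993);        // prints 9007199254740992: precision silently lost
    console.log(9007199254740993n + 1n);  // 9007199254740994n: BigInt stays exact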


Rewriting a large codebase with many years of work behind it to a totally new language is a pretty big task, and often ends up introducing new bugs.

But I do agree Java has downsides. A more practical conversion would be to Kotlin. That's a much closer language to Java, and it can interop, which means you could rewrite the code base incrementally, like adopting TypeScript in a JavaScript project. Also, Kotlin's Wasm target uses WasmGC, which avoids the downsides of linear memory: it can use the browser's memory management instead of shipping its own, etc., which is an improvement over C++ and Rust.


The code wasn't really ported to WasmGC. They now compile the same code base to WasmGC with J2CL where before they used J2CL to transpile it to JavaScript.


Somehow Java always ends up being the best thing to build on, even transpiled. Java is easy to build upon in the long run and can carry itself and its codebase into new times.


I wonder if the rules of continuous improvement apply to ML models. It seems that regressions are very easy to introduce, and that with a complex task like FSD it is virtually impossible to use the established tools to safeguard the stack from regressions (i.e. test cases that assert some unbroken functionality).


They should teach Tesla’s “autopilot” (and its FSD upgrade) in business schools. Turns out you can sustainably push up company valuation on vapourware. You have to wonder if Tesla’s autonomous driving technology was actually ever meant to turn into a product. Or whether it is mostly a tool to justify the lofty Tesla stock price. I very much doubt that it is technologically ahead of its competitors.


> I very much doubt that it is technologically ahead of its competitors.

This is where they are as of April 2024: https://static.nhtsa.gov/odi/inv/2022/INCLA-EA22002-14498.pd...

"ODI completed an extensive body of work via PE21020 and EA22002, which showed evidence that Tesla’s weak driver engagement system was not appropriate for Autopilot’s permissive operating capabilities. This mismatch resulted in a critical safety gap between drivers’ expectations of the L2 system’s operating capabilities and the system’s true capabilities. This gap led to foreseeable misuse and avoidable crashes. During EA220002, ODI identified at least 13 crashes involving one or more fatalities and many more involving serious injuries, in which foreseeable driver misuse of the system played an apparent role. ODI’s analysis conducted during this investigation, which aligns with Tesla’s conclusion in its Defect Information Report, indicated that in certain circumstances, Autopilot’s system controls and warnings were insufficient for a driver assistance system that requires constant supervision by a human driver."


And... one more from yesterday:

"Tesla driver using self-driving mode slammed into police cruiser in Orange County" - https://www.latimes.com/california/story/2024-06-13/self-dri...

"Tesla in self-drive mode slams into police car responding to fatal crash": https://youtu.be/ukq6h55GnvE


> "...a critical safety gap between drivers’ expectations of the [...] system’s operating capabilities and the system’s true capabilities"

Just s/driver/user/g and it sounds like a lot of contemporary LLM hype.

IMO, Tesla's not an outlier -- in today's stock-price-is-king world, it's common to see such overselling in various domains.


Ford's Blue Cruise is hands-off on mapped highways. Waymo is driverless. Musk is really getting away with all the hype, given what Tesla's cruise control actually delivers.


Ford's Blue Cruise is not hands-off on all mapped highways, only on portions of some highways where curves are shallow. And it can suddenly disable itself in a failure mode that demands you take over in a split second.

Also it still kills people: https://www.youtube.com/watch?v=YgFPW5esM04


> And it can suddenly demand you take over in a failure mode that completely disables itself forcing you to instantly take over in a split second.

Like FSD, too, you mean.


Tesla has been hands-off since a week ago (provided you are sunglasses-free and watching the road).


It is really difficult to tell whether this is sarcasm or not, and I am not being sarcastic.


Tesla's most recent version of FSD (which is released to a limited number of non-employee testers so far) uses only eye tracking for driver monitoring and does not require the user to touch the steering wheel as long as they are looking forward.


Right, I was confused. "Hands-off" initially sounded like the car was trustworthy enough to drive itself.

But in this case it means the car now trusts you to trust it by not putting your hands on the wheel?


The whole point is for two systems to monitor the road.

If you think Waymo doesn't have thousands of people doing the same, but remotely, I have a bridge to sell you.


For Tesla yes, because it can't be trusted. Allowing you to not touch the wheel while still expecting you to jump in at any time isn't an improvement by any measure.

Any remote watcher can't be expected to avoid a crash in realtime.

Waymo is trusted to behave safely without supervision, but of course they monitor everything to validate and improve.


Ah, so it is just another word for negligence? Is that a feature people are championing?


One interesting bit is that recently Nvidia's CEO said Tesla is ahead of the other companies in the space. My opinion is also that Tesla isn't, but then we have a conundrum. Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?

> “Tesla is far ahead in self-driving cars,” Huang said in an exclusive interview with Yahoo Finance.

https://finance.yahoo.com/news/nvidia-ceo-says-tesla-far-ahe...


Nvidia's CEO said that a day after Elon raised a multi-billion-dollar capital round, all spent on a 100,000-GPU Nvidia cluster for another of his companies, so it could easily just be flattery.


> Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?

Yes.


It's amazing how little you can trust these people even though you would expect that the CEO of NVIDIA has a reputation to maintain.


Did you forget the crypto and NFT hype train presentations that Jensen did ~2 years ago?


Bear in mind he was selling shovels to the miners, not shilling coins, I think?


> Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?

Waymo works in limited places and relies on a ridiculous number of sensors. Have you seen one in real life with all its equipment? Surely that makes it less advanced, or at least less ambitious. Assuming something malicious about Nvidia's CEO seems like a big leap.


I've always wondered why adding more sensors to a self driving car was such a big deal.

Yes, it makes the cars more expensive but at some point, these parts will get mass produced and they will be cheap as hell.

The more data a car has, the better it should drive.


Once you add sensors you can't promise buyers that their car with less sensors will be just as self driving as the car with more sensors. So adding sensors is a big deal for Tesla, because they are in the market of selling a promise of self driving.


Maybe that's a harsh lesson in making promises that can't be delivered?

Or, more likely, no lessons will be learned and people will still trust what the company says in the future for arbitrary reasons.


Ambition isn't the goal, self driving is. Who cares how many sensors it has as long as it works?


> Who cares how many sensors it has as long as it works?

Funnily enough, the original topic was that looking sexier was more profitable than working.


Well, if something is prohibitively priced, it is just like not existing.


TIL that Porsche cars don't exist.


You know what I mean; don't pick apart my words.


One is literally able to drive itself with no human. The other is advanced cruise control. This is like saying a regular plane is better than a fighter jet because it has fewer sensors and both can fly.


Give it some time!

When the dust settles, it will certainly be taught in business schools. And Musk will be in prison (not for FSD specifically).

I watched the shareholders' meeting yesterday; it was amazing. Elon repeated all the same things he has kept telling us for at least the past 5 years, none of which is close to becoming a reality. And none was described in any tangible detail; all very vague promises.

As for FSD, autonomy and robotaxis, one has to remember when they were announced and promoted: when Tesla was close to bankruptcy (per Elon himself).


As GME, that Trump social thing, et al. have already shown a couple of times, fundamentals don't matter. Tesla is held by folks who either don't care (ETFs, institutions, hedge funds, blablabla) or love Elon.


I'm not necessarily disagreeing, but, given enough capital, a lot of wild sounding things can become real. Hype is a great tool to attract capital.

Clearly Musk understands this very well and plays that game expertly.

It's really not necessary for all his promises to come true, as long as he can point to a track record of having made some of those wild things come true. So far, that's working.


Sure, I mean Amazon hasn't paid dividends either, yet it's a good investment. So there are many ways to value a stock. And as Jim Simons showed, the usual traders miss quite a lot.


You can't have a half-trillion market cap with a fanboy stock.


That's why the litany of apathetic institutional investors is listed first.


Bitcoin begs to differ


Somebody should have pulled a prank and arrived at the meeting in a Mercedes EQS or S-Class sedan with Level 3 autonomy...


I would be very impressed if they managed to do it. This so-called Level 3 "autopilot" doesn't change lanes.


"And Musk will be in prison"

Or closer to formal political power.


Yep, agree! Sad as it is.


I sleep easy at night knowing that Elon Musk was born in South Africa and is therefore ineligible to be US president.


That is a law. Laws can be changed.

(Even though it is very unlikely, give it some years and there might be the next iteration of the someone-leading-a-state-like-a-company trope.)


While I agree that Tesla is nowhere close to having an actually autonomous driving system, I think that Tesla did invest more into research and probably collected more data than anyone else on the market. This amount of research has to have some results, even if they don't have a product yet.


Yep, because if you want something bad enough, and if it’s clearly possible, enough research will get us there! Except: commercially viable fusion, quantum computers, hyper loops, AGI, interstellar space travel. Hmmm.

That’s the problem with research; much of it turns out to be a dead-end, or exponentially more difficult as you approach the goal. FSD looked extremely likely there for a time, but I think the problem was actually AGI in disguise.


Machine-learning of any kind has this uncanny ability to get you really far with very little work, which gives this illusion of rapid progress. I remember watching George Hotz' first demo of his self-driving thing, it's absolutely nuts how much he was able to do himself with so little. Sure, it drove like a drunk toddler, but it drove!

And that tricks you into thinking that the hard parts are done, and you just need to polish the thing, fill in the last few cases, and you're done!

Except, the work needed to go from 90% there to 91% there is astronomically higher than the work needed to go from 0% to 90%. And the work needed from 91% to 92% is even higher. Partly because the complexity of the corner cases increase exponentially, and partly because everyone involved doesn't actually know how the model works. It's been hilarious watching Tesla flail at this, because every new release that promises the moon always has these weird regressions in unrelated areas.

My favourite example of complexity is that drivers need to follow not only road signs and traffic lights, they also need to follow hand signals from certain people. Police officers, for example, can use hand signals to direct traffic, and it's illegal not to follow those. I can see a self-driving system recognizing hand signals and steering the car accordingly, but suddenly you get a much harder problem: How can the car know the difference between lawful hand signals, and some dude in a Halloween police uniform waving his hands?

You want to drive autonomously coast to coast? Cool, now the car needs to know how to correctly identify local police officers, highway patrol officers, state police officers, and county sheriffs, depending on the car's location.

Good luck little toaster!


Park rangers, all the fire departments, normal people trying to temporarily route traffic around something unusual like a crash, animals, hazardous conditions.

And it needs to detect when someone is doing a prank, or is just a homeless guy yelling and waving their fist at cars, etc.


One of the original overpromises from Musk was that you could definitely totally summon your car from NY to LA and it would magically drive all the way, next year, for sure.

Yeah, because if it understands hand gestures, it totally won't be used by criminals, directing it to a chop shop where they can disable it and cut it to pieces. What are you gonna do as the owner?


Safest bet you can make: no one old enough to have an HN account today will live to see anything even closely resembling FSD.


It already exists in Waymo. It obviously has a limited ODD, but it absolutely works and easily passes for "closely resembling FSD" in most real use cases (i.e. getting to work, school, and the store and back).


Cathie Wood of Ark Invest just gave Tesla a $2000 price target on the back of an FSD taxi service... That's around a $5T valuation.

You have to wonder if she is dumb, or just knows Tesla investors are totally delusional.

Or perhaps, when you come upon an OG delusional Musk worshipper and call them out, they can point at their money pile and call you the idiot...


>... the 65-year-old divorced mother of three is a devout Christian who starts every day by reading the Bible while her coffee brews, and who relies on her faith during testing moments, such as the many market upheavals...

God will make it go to $2000.


I really think she is a hack


> Turns out you can sustainably push up company valuation on vapourware.

It is very nearly standard practice in startupland to describe a yet-to-be-developed product in the present tense prior to/during development.

Strictly speaking, it’s lying/fraud, but it is so pervasive and widespread as to be expected and could rightly be called standard industry practice.

This is in no way a Tesla-specific thing.


It's weird how people keep calling this vaporware when it actually works, is active on roads, and is used by tens of thousands of people. That's the strangest usage of the term I've ever seen. Vaporware is the term used to describe products announced that never make it to market in any form.


> when it actually works*

*: The definition of "work" includes veering into (including but not limited to) other vehicles, road shoulders, or road dividers, and sometimes impaling the car (including, but not limited to, its driver) on road railings or other roadside objects. The car might catch fire as a result of the event, or independently of it if its feelings are hurt or it just feels like it, and burn for days, releasing its densely packed magic smoke, sweat, blood vapor, and the condensed tears of its designers and builders. The fumes might be toxic. Please don't inhale them.


Did someone give you the impression that cars without FSD have ever been safe?

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Most dangerous way to travel, full stop. FSD or not. I don't think a perfect safety record is possible. Only better than what people currently accomplish given the inherent unsafety of the whole system. If safety were a top priority, the cars would be on rails.


> Did someone give you the impression that cars without FSD have ever been safe?

Did I say anything resembling or implying that? I don't think so.

> Most dangerous way to travel, full stop.

I love a quote from a famous driver, paraphrasing: "Racing is people who know what they're doing, driving on a closed circuit. Traffic is the same, but with people who don't know what they're doing."

On top of that, I have had enough incidents to know what humans can do in traffic. They make good stories though.

> I don't think a perfect safety record is possible.

Neither do I.

> Only better than what people currently accomplish given the inherent unsafety of the whole system.

I think cars with driver monitoring are safer than cars with FSD or hands-free driving. I love to drive cars with lane keeping, adaptive cruise and driver monitoring, because these systems improve safety and augment humans at the same time.

I don't believe that AI and/or computer vision is close to matching the human perception and reasoning needed to handle a 2-ton steel box the way humans do. Augmenting humans' capabilities is a far safer and more reliable (if unsexier) way.

> the cars would be on rails.

I love trains to death, but they're not perfect either.


Fake it til you make it is a fundamental principle of startups. We just don’t usually see it at such a vast scale.

There's a timeline where Theranos gets acquired for $9B by UnitedHealth because they kept the grift alive juuust a bit longer, and Elizabeth Holmes ascends to the tech firmament permanently while her enablers congratulate each other.

Tesla has even more, and deeper, financial and branding defense mechanisms. That said, the clock is ticking now, I think.


> Elizabeth Holmes ascends to the tech firmament permanently while her enablers congratulate each other.

Holmes and at least some of her supporters still ardently insist, to this day, now that everything is out of the bag (the "pulling filing cabinets in front of doors to specific labs on FDA inspection days so they only see the labs we want them to" crap, all of it), that she, and humanity, have been robbed of the truly magnificent biomedical advances that Theranos was just about to deliver.


Wouldn't UnitedHealth have been in the same position as HP when they acquired Autonomy for $10B in 2011 on the basis of their cooked books?


But it's... out right now. You can literally use it today.


And it sort of works in limited cases.

FSD is like ChatGPT: it works in many cases, it makes some mistakes, but it is certainly not "useless". It won't replace full-time humans yet (the same way that ChatGPT does not replace a developer), but it can still work in some scenarios.

To the investor, ChatGPT is sold as "AGI is just around the corner".


But "works in limited cases" is absolutely not enough, given what it promises. It drove into static objects a couple of times, killing people. Recent videos still show behavior like speeding through stop signs: https://www.youtube.com/watch?v=MGOo06xzCeU&t=990s

Meaning that it's really not reliable enough to take your hands off the wheel.

Waymo shows that it is possible, with today's technology, to do much much better.


It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.

What they do claim is that with human supervision, it lowers the accident rate to one per 5.5 million miles, which is a lot better than the overall accident rate for all cars on the road. And unlike Waymo, it works everywhere. That's worthwhile even if it never improves from here.

Fwiw you can take your hands off the wheel now, you just have to watch the road. They got rid of the "steering wheel nag" with the latest version.


Well the recent NHTSA report [1] shows Tesla intentionally falsified those statistics, so we can assume Tesla-derived statements are intentionally deceptive until proven otherwise.

Tesla only counts crashes with pyrotechnic (airbag) deployments in its own numbers, which NHTSA states covers only ~18% of all crashes, a figure derived from publicly available datasets. So Tesla chooses not to account for a literal ~5x discrepancy (1/0.18 is roughly 5.5) derivable from publicly available data. They make no attempt to account for anything more complex or subtle. No competent member of the field would make errors that basic except to distort the conclusions.

The usage of falsified statistics to aggressively push product to the risk of their customers makes it clear that their numbers should not only be ignored, but assumed to be malicious.

[1] https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf


> It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.

"By 2019 it will be financially irresponsible not to own a Tesla, as you will be able to earn $30K a year by utilizing it as a robotaxi as you sleep."

This was always horseshit, and still is:

If each Tesla could earn $30K profit a year just ferrying people around (and we'd assume more, in this scenario, because it could be 24/7), why the hell is Tesla selling them to us versus printing money for themselves?


They do plan to run their own robotaxis. But there are several million Teslas on the road already. They're just leaving money on the table if they don't make them part of the network, and doing so means they have a chance to hit critical mass without a huge upfront capital expenditure.


The product doesn't work until the human can be human instead of telling followers only subhumans complain.

Being more specific: the product either requires a certification, like a driving license, or must be foolproof.


> it works everywhere.

where there's enough bandwidth

> you just have to watch the road

... and then react in a split second, or what? It's simpler to say your goodbyes before the trip.

> They just think they'll get there.

Of course. I think so too. Eventually they'll hire the receptionist from Waymo, and he/she will tell them to build a fucking world model that has some object permanence.


There's no bandwidth requirement, it runs locally on the car.


The driving-into-static-objects thing is horrible and unacceptable, I agree. As I understand it, this occurred because Autopilot works by recognizing specific objects (vehicles, pedestrians, traffic cones) and avoiding those. So if an object isn't one of those things, or isn't recognized as one of those things, and the car thinks it's in a lane, it keeps going.

Yes, it was a stupid system and you are right to criticize it. And as a Tesla driver in a country that still only has that same Autopilot system and not FSD, I'm very aware of it.

But the current FSD has been rebuilt from the ground up to be end-to-end neural, and they have the occupancy network now (which is damn impressive) giving a 3D map of occupied space, which should stop that problem from occurring.


> Meaning that it's really not reliable enough to take your hands off the wheel.

Soooo just like ChatGPT then? As the parent comment said.


There are no FSD deaths. Only old Autopilot ones.



Has Tesla actually stated that in a clear manner? They seem a bit cagey about such data.


I think this is one of my fave FSD predictions:

https://www.huffingtonpost.co.uk/entry/tesla-driverless-cars...

Oct 2014: "Five or six years from now we will be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination."


And Waymo got there, in limited conditions.


Indeed. "Limited conditions" being the issue here (and as per the article).


That’s metaphysics.


I guess the difference is ChatGPT is less likely to cause death if it makes a mistake.

Users generally have time to decide if the output ChatGPT provides is accurate and worth actioning.


At this point, I'd be surprised if ChatGPT has not yet given someone a response which caused them to make a mistake that resulted in a death.

We found out about the lawyers citing ChatGPT because they were called out by a judge. We find out about Google Maps errors when someone drives off a broken bridge.

https://edition.cnn.com/2023/09/21/us/father-death-google-gp...

For other LLMs we see mistakes bold enough that everyone can recognise them — the headlines about Google's LLM suggesting eating rocks and putting glue on your pizza (at least it said "non-toxic glue").

All it takes is some subtle mistake. The strength and the weakness of the best LLMs is their domain knowledge is part way between a normal person and a domain expert — good enough to receive trust, not enough to deserve it.


If ChatGPT emits a fragment of code that doesn’t compile, a developer can simply undo and try again.

No such luxury is granted to the driver using FSD who has just collided with another vehicle.


Or if it produces code that compiles but is subtly wrong, it probably won't kill someone; well, not until we start developing safety-critical systems with it. One day we might have only developers who can't actually write code fluently, and we'll expect them to massage whatever LLMs produce into something workable. Oh well.


Very low bar to have code that just compiles.


Not much better than assisted driving, which actually says what it does.

