Hacker News | xnx's comments

Do picture rails work for gallery walls (clusters of frames)?

Fascinating. I wonder if supply constraints will make drywall recycling profitable.

Cerebras is a bit of a stunt like "datacenters in spaaaaace".

Terrible yield: one defect can ruin a whole wafer instead of just a chip region. Poor perf./cost (see above). Difficult to program. Little space for RAM.
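The yield point can be made concrete with the standard Poisson yield model, Y = exp(-D·A). A quick sketch (the defect density and areas below are illustrative round numbers, not Cerebras's actual figures):

```python
import math

def poisson_yield(defect_density_per_cm2: float, area_cm2: float) -> float:
    """Fraction of parts with zero defects under a Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Illustrative defect density of 0.1 defects/cm^2:
die = poisson_yield(0.1, 1.0)      # ~0.90 for a 1 cm^2 die
wafer = poisson_yield(0.1, 460.0)  # ~1e-20 for a ~460 cm^2 wafer-scale part
```

Under any plausible defect density, a defect-free full wafer essentially never happens, which is why wafer-scale designs have to build in redundancy and route around dead regions rather than rely on perfect silicon.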


They claim the opposite, though, saying the chip is designed to tolerate many defects and work around them.

Or Google TPUs.

TPUs don't have enough memory either, but they have really great interconnects, so they can build a nice high density cluster.

Compare the photos of a Cerebras deployment to a TPU deployment.

https://www.nextplatform.com/wp-content/uploads/2023/07/cere...

https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iOLs2FEQxQv...

The difference is striking.


Oh wow the cabling in the first link is really sloppy!

It feels like we're a week away from the Claw hype supplanting AI hype. Companies will start renaming things ClawX to get on the hype bandwagon.

I think "claw" isn't appealing enough to be genericized and that "agent" will continue to be the generic term, but we'll see.

It's just an abstraction people are excited about at the moment. Langchain was an exciting abstraction at one point.

My bet is we converge on a super minimal model<>computer architecture.


Google is absolutely running away with it. The greatest trick they ever pulled was letting people think they were behind.

Their models might be impressive, but their products absolutely suck donkey balls. I gave Gemini web/CLI two months and ran back to ChatGPT. Seriously, it would just COMPLETELY forget context mid-dialog. When asked about improving air quality, it just gave me a list of (mediocre) air purifiers without asking for any context whatsoever, and I can list thousands of conversations like that. Shopping or comparing options is just nonexistent. It uses Russian propaganda sources for answers and switches to Chinese mid-sentence (!) while explaining some generic Python functionality. It’s an embarrassment and I don’t know how they justify the 20 euro price tag on it.

I agree. On top of that, in true Google style, basic things just don't work.

Any time I upload an attachment, it just fails with something vague like "couldn't process file". Whether that's a simple .MD or .txt with less than 100 lines or a PDF. I tried making a gem today. It just wouldn't let me save it, with some vague error too.

I also tried having it read and write stuff to "my stuff" and Google drive. But it would consistently write but not be able to read from it again. Or would read one file from Google drive and ignore everything else.

Their models are seriously impressive. But as usual Google sucks at making them work well in real products.


I don't find that at all. At work, we've no access to the API, so we have to force-feed a dozen (or more) documents, code and instruction prompts through the web upload interface. The only failures I've ever had in well over 300 sessions were due to connectivity issues, not interface failures.

Context window blowouts? All the time, but never document upload failures.


I'm talking about Gemini in the app and on the web. As well as AI studio. At work we go through Copilot, but there the agentic mode with Gemini isn't the best either.

Honestly this is as Google product as you can get. Prizes for some, beatings for others.

I've used their Pro models very successfully in demanding API workloads (classification, extraction, synthesis). On benchmarks it crushed the GPT-5 family. Gemini is my default right now for all API work.

However, it took me only a week to ditch Gemini 3 as a user. The hallucinations were off the charts compared to GPT-5. I've never even bothered with their CLI offering.


It’s all context/use case; I’ve had weird things too, but if you only use markdown inputs and specific prompts, Gemini 3 Pro is insane, not to mention the context window

Also because of the long context window (1 mil tokens on thinking and pro! Claude and OpenAI only have 128k) deep research is the best

That being said, for coding I definitely still use Codex with GPT 5.3 XHigh lol


Antigravity is an embarrassment.

The models feel terrible, somehow, like they're being fed terrible system prompts.

Plus the damn thing kept crashing and asking me to "restart it". What?!

At least Kiro does what it says on the tin.


My experience with Antigravity is the opposite. It's the first time in over 10 years that an IDE has managed to take me a bit out of the JetBrains suite. I did not think that was possible, as I am a hardcore JetBrains user/lover.

Have you tried Cursor or VS Code with Github Copilot in agent mode (recently, not 3 or 6 months ago)?

I've recently tried a buuuuunch of stuff (including Antigravity and Kiro) and I really, really, could not stomach Antigravity.


It's literally just vscode? I tried it the other day and I couldn't tell it apart from windsurf besides the icon in my dock

Yeah same here. Even though it's vscode I'm still using it and don't plan to renew Intellij again. Gemini was crap but Opus smashes it.

It is windsurf isn't it, why would you expect it to be different?


How can the models be impressive if they switch to Chinese mid-sentence? I've observed those bizarre bugs too. Even GPT-3 didn't have those. Maybe GPT-2 did. It's actually impressive that they managed to botch it so badly.

Google is great at some things, but this isn't it.


It's so capable at some things, and others are garbage. I uploaded a photo of some words for a spelling bee and asked it to quiz my kid on the words. The first word it asked wasn't on the list. After multiple attempts to get it to ask only the words in the uploaded pic, it did, and then it would get the spellings wrong in the Q&A. I gave up.

I had it process a photo of my D&D character sheet and help me debug it as I'm a n00b at the game. Also did a decent, although not perfect, job of adding up a handwritten bowling score sheet.

100x agree. It gives inconsistent edits and would regularly try to do things I explicitly commanded it not to.

Sadly true.

It is also one of the worst models to have a sort of ongoing conversation with.


I don't have any of these issues with Gemini. I use it heavily every day. A few glitches here and there, but it's been enormously productive for me. Far more so than ChatGPT, which I find mostly useless.

Agreed on the product. I can't make Gemini read my emails in Gmail. One day it says it doesn't have access, the other day it says "Query unsuccessful". Claude Desktop has no problem reaching Gmail, on the other hand :)

And it gives incorrect answers about itself and google’s services all the time. It kept pointing me to nonexistent ui elements. At least it apologizes profusely! ffs

Their models are absolutely not impressive.

Not a single person is using it for coding (outside of Google itself).

Maybe some people on a very generous free plan.

Their model is a fine mid 2025 model, backed by enormous compute resources and an army of GDM engineers to help the “researchers” keep the model on task as it traverses the “tree of thoughts”.

But that isn’t “the model” that’s an old model backed by massive money.


Uhh, just false.

It's just poop tier.

Come on.

Worthless.

Do you have any market counterpoints?

Market counterpoints that aren't really just a repackaging of:

  1. "Google has the world's best distribution" and/or
  2. "Google has a firehose of money that allows them to sell their 'AI product' at an enormous discount"?
Good luck!

These benchmarks are super impressive. That said, Gemini 3 Pro benchmarked well on coding tasks, and yet I found it abysmal. A distant third behind Codex and Claude.

Tool calling failures, hallucinations, bad code output. It felt like using a coding model from a year ago.

Even just as a general use model, somehow ChatGPT has a smoother integration with web search (than google!!), knowing when to use it, and not needing me to prompt it directly multiple times to search.

Not sure what happened there. They have all the ingredients in theory but they've really fallen behind on actual usability.

Their image models are kicking ass though.


Peacetime Google is not like wartime Google.

Peacetime Google is slow, bumbling, bureaucratic. Wartime Google gets shit done.


OpenAI is the best thing that happened to Google apparently.

Just not search. The search product has pretty much become useless over the past 3 years, and the AI answers often only get you back to the level of search 5 years ago. This creates a sense that things are better, but really it’s just become impossible to get reliable information from an avenue that used to work very well.

I don’t think this is intentional, but I think they stopped fighting SEO entirely to focus on AI. Recipes are the best example: completely gutted, and almost all recipe sites (therefore the entire search page) are run by the same company. I didn’t realize how utterly consolidated huge portions of information on the internet were until every recipe site simultaneously implemented the same anti-adblock about 3 months ago.


The search product became useless on a particular day in 2019, as discussed on HN some time ago:

https://news.ycombinator.com/item?id=40133976


Competition always is. I think there was a real fear that their core product was going to be replaced. They're already cannibalizing it internally so it was THE wake up call.

Next they compete on ads...

Wartime Google gave us Google+. Wartime Google is still bumbling, and despite OpenAI's numerous missteps, I don't think it has to worry about Google hurting its business yet.

I do miss Google+. For my brain / use case, it was by far the best social network out there, and the Circle friends and interest management system is still unparalleled :)

Google+ was fun. Failed in the market though.

Apple made a social network called Ping. Disaster. MobileMe was silly.

Microsoft made Zune and the Kin 1 and Kin 2 devices and Windows phone and all sorts of other disasters.

These things happen.


Windows Phone was actually good. I would even say my Lumia was one of the best experiences I've ever had on mobile. G+ was also good. Efficient markets mean that you can "extract" rent via selling data or attention, etc., not really that what is good wins.

I have a hypothesis that Google+ just wasn't addictive. Which is a good thing now, but not back then

But wait two hours for what OpenAI has! I love the competition, and how someone just a few days ago was telling me how ARC-AGI-2 was proof that LLMs can't reason. The goalposts will shift again. I feel like most of human endeavor will soon be just about trying to continuously show that AI's don't have AGI.

> I feel like most of human endeavor will soon be just about trying to continuously show that AI's don't have AGI.

I think you overestimate how much your average person-on-the-street cares about LLM benchmarks. They already treat ChatGPT or whichever as generally intelligent (including to their own detriment), are frustrated about their social media feeds filling up with slop and, maybe, if they're white-collar, worry about their jobs disappearing due to AI. Apart from a tiny minority in some specific field, people already know themselves to be less intelligent along any measurable axis than someone somewhere.


"AGI" doesn't mean anything concrete, so it's all a bunch of non-sequiturs. Your goalposts don't exist.

Anyone with any sense is interested in how well these tools work and how they can be harnessed, not some imaginary milestone that is not defined and cannot be measured.


I agree. I think the emergence of LLMs has shown that AGI really has no teeth. I think for decades the Turing test was viewed as the gold standard, but it's clear that there doesn't appear to be any good metric.

The Turing test was passed in the 80s; somehow it has remained relevant in pop culture despite the fact that it's not a particularly difficult technical achievement.

It wasn’t passed in the 80s. Not the general Turing test.

c. 2022 for me.

Soon they can drop the bioweapon to welcome our replacement.

Not in my experience with Gemini Pro and coding. It hallucinates APIs that aren't there. Claude does not do that.

Gemini has flashes of brilliance, but I regard it as unpolished: some things work amazingly, some basics don't work.


It's very hard to tell the difference between bad models and stinginess with compute.

I subscribe to both Gemini ($20/mo) and ChatGPT Pro ($200/mo).

If I give the same question to "Gemini 3.0 Pro" and "ChatGPT 5.2 Thinking + Heavy thinking", the latter is 4x slower and it gives smarter answers.

I shouldn't have to enumerate all the different plausible explanations for this observation. Anything from Gemini deciding to nerf the reasoning effort to save compute, versus TPUs being faster, to Gemini being worse, to this being my idiosyncratic experience, all fit the same data, and are all plausible.


You nailed it. Gemini 3 Pro seems very "lazy" and seems to never reason for more than 30 seconds, which significantly impacts the quality of its outputs.

I'd personally bet on Google and Meta in the long run since they have access to the most interesting datasets from their other operations.

Agree. Anyone with access to large proprietary data has an edge in their space (not necessarily for foundation models): Salesforce, Adobe, AutoCAD, Caterpillar.

What is their Claude code equivalent?


They seem to be optimizing for benchmarks instead of real world use

Yeah if only Gemini performed half as well as it does on benches, we'd actually be using it.

It was obvious to me that they were top contender 2 years ago ... https://www.reddit.com/r/LocalLLaMA/comments/1c0je6h/google_...

Gemini's UX (and of course privacy cred as with anything Google) is the worst of all the AI apps. In the eyes of the Common Man, it's UI that will win out, and ChatGPT's is still the best.

Google privacy cred is ... excellent? The worst data breach I know of them having was a flaw that allowed access to names and emails of 500k users.

Link? Are you conflating "500k Gmail accounts leaked [by a third party]" with Gmail having a breach?

Afaik, Google has had no breaches ever.



Google is the breach.

Their SECURITY cred is fantastic.

Privacy, not so much. How many hundreds of millions have they been fined for “incognito mode” in chrome being a blatant lie?


> Their SECURITY cred is fantastic.

In a world where Android vulnerabilities and exploits don't exist


Google's most profitable branch is AdSense; they don't need breaches to have privacy issues, given that elephant-sized conflict of interest.

If you consider "privacy" to be 'a giant corporation tracks every bit of possible information about you and everyone else'?

OpenAI is running ads. Do you think they'll track less?

They don't even let you have multiple chats if you disable their "App Activity" or whatever (wtf is with that ass naming? they don't even have a "Privacy" section in their settings the last time I checked)

and when I swap back into the Gemini app on my iPhone after a minute or so the chat disappears. and other weird passive-aggressive take-my-toys-away behavior if you don't bare your body and soul to Googlezebub.

ChatGPT and Grok work so much better without accounts or with high privacy settings.


I find Gemini's web page much snappier to use than ChatGPT - I've largely swapped to it for most things except more agentic tasks.

> Gemini's UX ... is the worst of all the AI apps

Been using Gemini + OpenCode for the past couple weeks.

Suddenly, I get a "you need a Gemini Access Code license" error but when you go to the project page there is no mention of this or how to get the license.

You really feel the "We're the phone company and we don't care. Why? Because we don't have to." [0] when you use these Google products.

PS for those that don't get the reference: US phone companies in the 1970s had a monopoly on local and long distance phone service. Similar to Google for search/ads (really a "near" monopoly but close enough).

0 - https://vimeo.com/355556831


You mean AI Studio or something like that, right? Because I can't see a problem with Google's standard chat interface. All other AI offerings are confusing both regarding their intended use and their UX, though, I have to concur with that.

The lack of "projects" alone makes their chat interface really unpleasant compared to ChatGPT and Claude.

No projects, completely forgets context mid-dialog, mediocre responses even on thinking, research got kneecapped somehow and is completely useless now, uses Russian propaganda videos as search material (what’s wrong with you, Google?), janky on mobile, consumes GIGABYTES of RAM on web (seriously, what the fuck?). Left a couple of tabs open overnight; my Mac was almost completely frozen because 10 tabs consumed 8 GB of RAM doing nothing. It’s a complete joke.

Fair enough. I'm always astonished how different experiences are because mine is the complete opposite. I almost solely use it for help with Go and Javascript programming and found Gemini Pro to be more useful than any other model. ChatGPT was the worst offender so far, completely useless, but Claude has also been suboptimal for my use cases.

I guess it depends a lot on what you use LLMs for and how they are prompted. For example, Gemini fails the simple "count from 1 to 200 in words" test whereas Claude does it without further questions.

Another possible explanation would be that processing time is distributed unevenly across the globe and companies stay silent about this. Maybe depending on time zones?


AI Studio is also significantly improved as of yesterday.

Gemini is completely unusable in VS Code. It's rated 2/5 stars, pathetic: https://marketplace.visualstudio.com/items?itemName=Google.g...

Requests regularly time out, the whole window freezes, it gets stuck in schizophrenic loops, edits cannot be reverted and more.

It doesn't even come close to Claude or ChatGPT.


Once Google launched Antigravity, I stopped using VS Code.

Smart idea to say anything against Google here from a throwaway account, I'm sitting in negative karma for that :')

Anti Google comments do pretty well on average. It's a popular sentiment. However, low effort comments don't.

Those black nazis in the first image model were a cause of insider trading.

I'm leery to use a Google product in light of their history of discontinuing services. It'd have to be significantly better than a similar product from a committed competitor.

Google is still behind the largest models I'd say, in real world utility. Gemini 3 Pro still has many issues.

They were behind. Way behind. But they caught up.

Trick? Lol, not a chance. Alphabet is a pure-play tech firm that has to produce products to make the tech accessible. They really lack in the latter, and this is visible when you see the interactions of their VPs. Luckily for them, if you start to create enough of a lead with the tech, you get many chances to sort out the product stuff.

You sound like Russ Hanneman from SV

It's not about how much you earn. It's about what you're worth.

Don't let the benchmarks fool you. Gemini models are completely useless no matter how smart they are. Google still hasn't figured out tool calling and making the model follow instructions. They seem to only care about benchmarks and being the most intelligent model on paper. This has been a problem with Gemini since 1.0 and they still haven't fixed it.

Also the worst model in terms of hallucinations.


Disagree.

Claude Code is great for coding, Gemini is better than everything else for everything else.


What is "everything else" in your view? Just curious -- I really only seriously use models for coding, so I am curious what I am missing.

Role-playing but Claude is as bad, same censored garbage with the CEO wanting to be your dad. Grok is best for everything else by far.

Are you using Gemini model itself or using the Gemini App? They are different.

Both

And mathematics?

Waymo is absolutely delighting in their luck that Elon is so stubborn that he has kept Tesla from being anywhere close to catching up.

According to Elon, "sensor ambiguity" is a danger to the process [1], and therefore only a single type of sensor is allowed. (Conveniently ignores that there can be ambiguity/disagreement between two instances of the same type of sensor)

The fact that people still trust him on literally anything boggles my mind.

[1] https://x.com/elonmusk/status/1959831831668228450


Sensor fusion allows you to resolve that ambiguity; I wonder if Elon is really as in touch with this as you would expect. No single sensor is perfect, they all have their problematic areas, and a good sensor fusion scheme allows you to have your sensors reinforce each other in such a way that each operates as close as possible to its area of strength.

No single sensor can ever give you that kind of resilience. Sure, it is easy in that you never have ambiguity, but that means that when you're wrong there is also nothing to catch you to indicate something might be up.

This goes for any system where you have such a limited set of inputs that you never reach quorum. The basic idea is to have enough sensors that you always have quorum, and to treat the absence of quorum as a very high-priority failure.


Even if it doesn't allow you to resolve the ambiguity, knowing that there is an ambiguity is extremely valuable. Say the lidar detects a pedestrian but the camera doesn't. Which one do you believe? Well, you propagate the ambiguity and take appropriate action, i.e. slow down, change lanes, etc. Don't drive through an area where there's a decent chance that you're going to kill someone by doing it.

Yes, absolutely. Knowledge about the fact that a conflict between sensors exists is valuable in its own right, it means you are seeing something that needs more work than simple reinforcement.

Fail safe, always. That's what I tried to get at with 'absence of quorum', it means you are in uncharted territory.
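That quorum idea fits in a few lines. A minimal sketch (hypothetical boolean detection votes, nothing from any real AV stack):

```python
from typing import Optional

def fuse_detections(votes: list[bool], quorum: int = 2) -> Optional[bool]:
    """Majority-style fusion: return a decision only when enough sensors
    agree; absence of quorum surfaces as None so the caller can fail
    safe (slow down, hand off, flag a fault)."""
    positives = sum(votes)
    negatives = len(votes) - positives
    if positives >= quorum:
        return True
    if negatives >= quorum:
        return False
    return None  # no quorum: uncharted territory, treat as a fault

# Camera misses an obstacle, lidar and radar both see it:
fuse_detections([False, True, True])      # -> True
# Two sensors split with no tiebreaker:
fuse_detections([False, True], quorum=2)  # -> None, fail safe
```

The point of returning None rather than guessing is exactly the "absence of quorum as a high-priority failure" above: with only two dissimilar sensors, disagreement is unresolvable, so the system must degrade gracefully instead of picking a winner.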


Last time I checked I relied entirely on vision to drive autonomously.

And birds didn't invent jet engines, so obviously we don't need those either, right?

That's a very naive way of looking at this.

You have an extremely detailed world model including a mental model of the drivers and other road users around you. You rely on sight, sound, experience and lots of knowledge. You are aware of the social contracts at work when dealing with shared resources and your brain is many orders of magnitude more powerful than any box full of electronics.

What you can do with 'just vision' misses the fact that you are part of the hardware.


I don’t disagree - what you’re saying is major improvements in AI would be needed to make this work. You are correct.

You rely on a moving camera, microphones and vibrations all together. Driven by a supposedly more advanced meatware than what tech can create today, so that it can properly reason even with faulty/missing signals.

Humans also get into accidents all the time, that's not a great benchmark.

You also have hearing, you can move your head and wear sunglasses to avoid glare, etc.

You have a much better GPU than it has.

Sensor ambiguity is straight up useful, as it can allow you to extract signals that neither sensor can fully capture. This is, like, basic stuff too; absolutely wild how he's the richest person in the world and considered an absolute genius.

Agreed, anyone who has worked on engineering a moderately complex system involving sensing has explored the power of multi domain sensing... without sensor fusion we'd be in the stone ages.

I've been trying to fuse my stone knives and bearskins, but I fear I will never craft a tricorder.

More importantly you can detect a failed sensor.

Truly. I don't understand why Tesla fans think camera/lidar fusion is unsolvable but camera/camera fusion is a non-issue.

Because they bought a Tesla with only cameras on it.

Admitting this would be admitting their Tesla will never be self driving.


I bought mine with cameras and a radar, which they then deprecated and left unused. Even though Autopilot was better when it had radar. Then I realized that this thing would never be self-driving and that its CEO was throwing nazi salutes. Cut my losses and got rid of it. Gotta admit defeat sometimes.

Add a tow hitch to Waymos and any car can be autonomous!

Unsure if you’re trolling, but you haven’t listened to what Tesla are actually saying.

Having more sensors complicates the matter, but yes, sure, you can do that if you want to. But just using vision simplifies training a huge amount. The more you think about it, the stronger this argument is. Synthesising data is a lot easier if you’re dealing with one fairly homogenous input.

But the real point is that cameras are cheap, so you can stick them in many many vehicles and gather vast amounts of data for training. This is why Waymo will lose - either to Tesla or more likely a Chinese car manufacturer.

I do not like Elon because I do not think nazi salutes or racism are cool, but I do think Tesla are correct here. Waymo wins for a while, then it dies.


Cameras are only "cheap" because of mobile phone camera development, radar/lidar is going through the same process with car and mobile robotics.

So the "we can train cheaply because of lots of cameras" argument falls down when, for example, BYD ships all of its cars with lidar for ADAS and can collect that data for training, alongside the vision from cameras and whatever other sensors a modern car has, like tyre pressure and suspension readings.

The argument that we can make the cars cheaper in the future by not collecting the additional data now has been proven wrong by the CN and KR manufacturers.

That's also independent of the whole EV side of things.


Chinese are going with lidars as well.

It's just that the cost of lidar is falling like crazy, with new automotive lidars using phased-array laser optics instead of what Waymo started with (mechanically scanned lidars).


Tesla doesn't even use good cameras. Compare to https://waymo.com/blog/2026/02/ro-on-6th-gen-waymo-driver#:~...

That assumes that hardware was/is/will be more expensive than.. simply scaling up data collection and training?

Which seems like a very bad assumption, I'm not even sure it was ever true and is getting less and less true.


The data is key. You need a lot of homogenous data collected at vast scale over places and time, and you need to be able to synthesise data accurately.

Waymo gets limited data from very limited locations, and will have a harder time synthesising data than others.


Do Tesla fans think that? I've seen plenty of Tesla fans say that lidar is unnecessary (which I tend to agree with), but never that lidar is actively detrimental as Musk says there.

I mean, humans have only their eyes. And most of them intentionally distract themselves while driving by listening to music, podcasts, playing with their phones, or eating.

I get your point about camera vs lidar. Humans do have other senses in play while driving though. We have touch/vibration (feeling the road surface texture), hearing, proprioception / acceleration sense, etc. These are all involved for me when I drive a car.

To be fair, humans are fairly poor drivers and generally can't be trusted to drive millions of miles safely.

Humans are not good drivers when it comes to long, monotonous rides (because we get tired)

But (some) humans have the ability to handle difficult situations, and no autonomous system gets anywhere close to that. So this is more of a "robots handle the easy 80% better, but fail hard on the rest of the 20%". Humans have a possibly worse 80% performance, but shine in the 20%.


Actually humans are fairly good drivers. The average US driver goes almost 2 million miles between causing injury collisions. Take the drunks and drug users out and the numbers for humans look even better.

I don't think averages work that way

Incorrect. Humans are fairly good engineers, so cars are pretty safe nowadays.

If you include minor fender-benders and unreported incidents, estimates drop to around 100,000–200,000 miles between any collision event.

This is cataclysmically bad for a designed system, which is why targets are super-human, not human.


Personally as much as people like to dunk on Musk, he did build several successful companies in extremely challenging domains, and he probably listens to the world-leading domain experts in his employ.

So while he might turn out to be wrong, I don't think his opininon is uninformed.


I fully agree with your first point: Musk has shown tremendous ability to manage companies to become unicorns. He's clearly skilled in this domain.

However, if you think about this for 2 seconds with even a rudimentary understanding of sensor fusion, more hardware is always better (ofc with diminishing marginal value).

But ~10y ago, when Tesla was in a financial pinch, Musk decided to scrap as much hardware as possible to save on operational cost and complexity. The argument about "humans can drive with vision only, so self-driving should be able to as well" served as the excuse to shareholders.


> humans can drive with vision only, so self-driving should be able to as well

In May 2016, Tesla Model S driver Joshua Brown died in Williston, Florida, when his vehicle on Autopilot collided with a white tractor-trailer that turned across the highway. The Autopilot system and driver failed to detect the truck's white side against a brightly lit sky, causing the car to pass underneath the trailer.

Our eyes are supported by our brain's AGI which can evaluate the input from our eyes in context. All Tesla had is a camera, and it didn't perform as well as eyes + AGI would have.

When you don't have AGI, additional sensors can provide backup. LiDAR would have saved Joshua Brown's life.


I'm an EE; I have worked with things like sensor fusion professionally. In short, sensor fusion depends on what sensors you have and how you combine them, especially when two sensors' outputs tend to disagree: which one is wrong and to what extent, and how a piece of noise gets reflected in each sensor's output, to avoid double-counting errors and coming up with unjustifiably confident results.

This field is extremely complex; it's often better to pick a sensor and stick with it rather than trying to figure out how to piece together data from very dissimilar sources.
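To make the double-counting concern concrete: textbook inverse-variance fusion of two measurements is only optimal when their noises are independent; shared noise makes the fused confidence unjustifiably high. A minimal sketch with illustrative numbers:

```python
def fuse(x1: float, var1: float, x2: float, var2: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two independent measurements.
    Returns (fused estimate, fused variance). If the sensors share a
    noise source, the true fused variance is larger than reported here,
    i.e. the result is overconfident."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return x, var

# Lidar range 10.0 m (var 0.04) and radar range 10.4 m (var 0.16):
est, var = fuse(10.0, 0.04, 10.4, 0.16)  # est = 10.08, var = 0.032
```

Note the fused variance (0.032) comes out smaller than either sensor's alone; that confidence gain is exactly what becomes unjustifiable if the two error sources are actually correlated.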


> I'm an EE; I have worked with things like sensor fusion professionally. In short, sensor fusion depends on what sensors you have and how you combine them, especially when two sensors' outputs tend to disagree: which one is wrong and to what extent, and how a piece of noise gets reflected in each sensor's output, to avoid double-counting errors and coming up with unjustifiably confident results.

> This field is extremely complex; it's often better to pick a sensor and stick with it rather than trying to figure out how to piece together data from very dissimilar sources.

Whether sensor fusion makes sense is a highly domain specific question. Guidance like "pick a sensor and stick with it" might have been correct for the projects you've worked on, but there's no reason to think this translates well to other domains.

For what it's worth, sensor fusion is extremely common in SLAM type applications.


> if two sensors' outputs tend to disagree

Use 3 sensors


What doesn’t make sense to me is that the cameras are nowhere near as good as human eyes. The dynamic range sucks, it can’t put down a visor or wear sunglasses to deal with beaming light, the resolution is much worse, etc. Why not invest in the cameras themselves if this is your claim?

I always see this argument but from experience I don't buy it. FSD and its cameras work fine driving with the sun directly in front of the car. When driving manually I need the visor so far down I can only see the bottom of the car in front of me.

The cameras on Teslas only really lose visibility when dirty. Especially in winter when there's salt everywhere. Only the very latest models (2025+?) have decent self-cleaning for the cameras that get very dirty.


"Works fine" as in it can follow a wide asphalt road's white lines. That is an absolutely trivial thing; a Lego Mindstorms kit could follow a line just fine with a black/white sensor.

This vision clearly doesn't scale to more complex scenarios.


FSD doesn't "work fine" driving directly into the sun. There are loads of YT videos that demonstrate this.

For which car? The older the car (hardware) version the worse it is. I've never had any front camera blinding issues with a 2022 car (HW3).

The thing to remember about cameras is that what you see in an image/display is not what the camera sees. Processing the image reduces the dynamic range, but FSD could work off of the raw sensor data.


Nobody cares that you think v14.7.22b runs well on HW3.1. Literally nobody.

It doesn't run well on HW3 at all. HW4 has significantly better FSD when running comparable versions (v14). The software has little to do with the front camera getting blinded though.

Especially the part where the cameras do not meet minimum vision requirements [1] in many states where it operates such as California and Texas.

[1] https://news.ycombinator.com/item?id=43605034


And to some extent, I also drive with my ears, not only with my two eyes. I can often hear a car in my blind spot. Not saying that I need hearing in order to drive, but the extra sensor is welcome when it helps.

There is an argument, for sure, about how many sensors are enough / too many. And maybe 8 cameras around the car are enough to surpass human driving ability.

I guess it depends on how safe we want the self-driving to be. If only we had a comprehensive driving test that all drivers (humans and robots) could take and be ranked on... each country's lawmakers could set the bar based on the test.


The other day I slammed the brakes at a green light, because I could hear sirens approaching -- even though the buildings on the corner prevented any view of the approaching fire trucks or their flashing lights. Do Teslas not have this ability?

I don't know whether Tesla's self-driving mode could do that.

However, notice that deaf people are allowed to drive, i.e. you are not expected to have full hearing to be allowed on the road.


Nuanced point: even if vision alone were sufficient to drive, adding sensors to the cars today could speed up development. Tesla's world model could be improved, accelerating development of the vision-only model that is truly autonomous.

Lowest cost per mile will win, and Tesla's Cybercab doesn't need an expensive suite of sensors. They use lidar in their validation/calibration test cars, which is the correct use of lidar. People are already driving USA coast to coast without a SINGLE intervention. It's already over, Tesla has won, Waymo can't compete on cost.

Tesla is using the ill-advised "make it cheap before you make it work" approach.

Been hearing this bullshit for a decade. Any day now…

Meanwhile Waymo is doing half a million rides a week, and Tesla is doing what, a few dozen? Maybe? Maybe zero? Who knows, because they lie and obfuscate about everything. Meanwhile I can go take a Waymo right now in cities all over America.


[flagged]


> lol ok whatever Waymo's says is gospel and everything Tesla says is suspect, EDS in action. Get help.

Given the history on this topic: https://motherfrunker.ca/fsd/

It's not unreasonable to distrust anything Elon says, especially about Tesla/self driving.


Great source, some random guys website that hasn't been updated for 4 years.

> Great source, some random guys website that hasn't been updated for 4 years.

It's literally linking to direct quotes from Elon, can you explain the problem with that?


> I fully agree with your first point: Musk has shown tremendous ability to manage companies to become unicorns. He's clearly skilled in this domain.

I would firmly disagree with that.

What Musk has done is bring money to develop technologies that were generally considered possible, but were being ignored by industry incumbents because they were long-term development projects that would not be profitable for years. When he brings money to good engineers and lets them do their thing, pretty good things happen. The Tesla Roadster, Model S, Falcon 9, Starlink, etc.

The problem with him is he's convinced that he is also a good engineer, and not only that but he's better than anyone that works for him, and that has definitively been proven wrong. The more he takes charge, the worse it gets. The Model X's stupid doors, all the factory insanity, the outdoor paint tent, etc. Model 3 and Model Y arguably succeeded in spite of his interference, but the Dumpstertruck was his baby and we can all see how that has basically only sold to people who want to associate themselves closely with his politics because it's objectively bad at everything else. The constant claims that Tesla cars will drive themselves, the absolute bullshit that is calling it "Full Self Driving", the hilarious claims of humanoid robots being useful, etc. How are those solar roofs coming? Have you heard of anyone installing a Powerwall recently? Heard anything about Roadster 2.0 since he went off claiming it would be able to fly? A bunch of Canadian truckers have built their own hybrid logging trucks from scratch in the time since Tesla started taking money for their semis and we still haven't seen the Tesla trucks haul more than a bunch of bags of chips.

The more Musk is personally involved with a project the worse it is. The man is useful for two things: Providing capital and blatantly lying to hype investors.

If he had stuck to the first one the world as a whole would be a better place, Tesla would probably be in a much better position right now.

SpaceX was for a long time considered to be further from his influence with Shotwell running the company well and Musk acting more as a spokesperson. Starship is sort of his Model X moment and the plans to merge in the AI business will IMO be the Cybertruck.


You say that you disagree with my point, but then your first paragraph just restates my argument. And your subsequent paragraphs don't refer to my comment at all.

I never claimed he's a good engineer, nor that he has high EQ, nor that he is honest, nor that he bears sole responsibility for the success of his companies.


Home batteries are being installed at insane rates in Australia at the moment. Very few of them are Powerwalls because Tesla have priced themselves out of the market (and also Elon’s reputation is toast).

This is all true, but is completely consistent with your parent post's claim that he's good at building unicorn companies.

I think his companies succeeded despite Elon. Tesla should be a $5T company and he fucked it up.

Strongly disagree. I don't like the fella, but thinking that he founds and successfully manages SpaceX and Tesla to their market value _by chance_ is ridiculous.

His autopilot has killed several people, sometimes the owner of the car, sometimes other drivers sharing the road. It is hard to root for this guy.

> The fact that people still trust him on literally anything boggles my mind.

Long-distance amateur psychology question: I wonder if he's convinced himself that he's a smart guy, after all he's got 12 digits in his net worth, "How would that have been possible if I were an idiot?".

Anyway, ego protection is how people still defend things like the Maga regime, or the genocide; it's hard for someone to admit that they've been stupid enough to have been fooled to vote for "Idi Amin in whiteface" (term coined by Literature Nobel Prize winner Wole Soyinka), or that the "nation's right to self-defense" they've been defending was a thin excuse for mass murder of innocents.


I've always wondered how people who are not 1/10th as smart as Elon convince themselves that he is not intelligent after solving robotics, AI, neuralink, and space all simultaneously.

The guy quite clearly couldn't put an HTML page up (see his tweets around Twitter's acquisition), and that's a field where he supposedly "worked".

On all the other topics he couldn't even name the field. The only thing he is good at is scamming people dumb enough to fall for it.


And what fraction Elon-Intelligence is needed to believe he actually invented/solved all that by himself?

Or did I miss the sarcasm?


I certainly don't trust anything he says 100%.

This is - to me - entirely separate from the fact that his companies routinely revolutionize industries.


Well, given that Elon openly lies on investor calls...

One of his latest, on the topic of rain/snow/mist/fog and handling with cameras:

"Well, we have made that a non-issue as we actually do photon counting in the cameras, which solves that problem."

No, Elon, you don't. For two reasons: reason one, part A, the types of cameras that do photon counting don't work well for normal 'vision'/imagery associated with cameras, and part B, are not actually present in your cars at all. And reason two, photon counting requires the camera being in an enclosed space to work, which cars on the road ... aren't.

What Elon has mastered the art of is making statements that sound informed, pass the BS detector of laypeople, and optionally are also plausibly deniable if actually called out by an SME.


If only there was a filter so we could fuse different sensor measurements into a better whole..
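The filter being winked at here is presumably the Kalman filter, the standard machinery behind most sensor-fusion stacks. A minimal 1-D sketch, with all noise parameters and data made up for illustration:

```python
# Scalar Kalman filter: blend a prediction with a measurement,
# weighting each by its uncertainty.

def kalman_step(x, p, z, q, r):
    """One predict/update cycle for a constant-value model.
    x, p: prior state estimate and its variance
    z:    new measurement
    q, r: process and measurement noise variances
    """
    # Predict: state assumed constant, uncertainty grows by q.
    p = p + q
    # Update: the Kalman gain trades off prior vs. measurement confidence.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Fuse a stream of noisy readings of a true value around 5.0 (made-up data).
x, p = 0.0, 1.0
for z in [4.8, 5.3, 4.9, 5.1, 5.0]:
    x, p = kalman_step(x, p, z, q=1e-4, r=0.25)
# x converges toward 5.0 while p (the estimate's variance) shrinks.
```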

I don't think it's purely stubbornness. Tesla sold the promise of software-only updates resulting in FSD to hundreds of thousands of people. Not all of those people are in the cult of Tesla. I would expect admitting defeat at this point would result in a large class-action lawsuit at the very least.

It wouldn't keep them from equipping _new_ models with additional sensors, spinning a story around how this helps them train the camera-only AI, or whatever.

It's vaporware, and it's dollars and cents. Tesla EVs are already too expensive. He has no margin to include thousands more in sensors; the alternative is the lawsuits that would follow if he admitted it was all vaporware.

I know it's "illegal" and technically sold as FSD (assisted), but just 2 days ago I was in a friend's Model Y and it drove from work to my house (both in San Jose) without any steering wheel or pedal touch, at all. And he told me he went to Palm Springs like that too.

I shit on Tesla and Elon on any opportunity, and it's a shame they basically have the software out there doing things when it probably shouldn't, but I don't think they're that far behind Waymo where it really matters, which is the thing actually working.


I suspect they have a long-tail problem with FSD. It might work fine 99% of the time, but that's simply not good enough.

Nice illusion of competence in easy conditions… until it hits a person and the Tesla performs far worse than the Waymo, both during the crash and afterwards PR-wise. Guarantee you Elon will throw the driver under the bus for not watching, not his sketchy system.

Palm Springs from San Jose? Albeit freeway throughout but that's quite impressive.

The terms of service probably require you to sue Tesla in that Texas district with his corrupt judge pal.

Elon cult members still to this day will tell me that because humans only use vision to drive all a Tesla needs is simple cameras. Meanwhile, I've been driven by Waymo and Tesla FSD and Waymo is by far my pick for safety and comfort. I actually trusted the waymo I was in, while the Tesla I rode in we had 2 _very_ scary incidents at high speeds in a 1 hour drive.

> humans only use vision to drive

I love this argument because it is so obviously wrong: how could any self aware person seriously argue that hearing, touch, and the inner ear aren't involved in their driving?

As an adult I can actually afford a reliable car, so I will concede that smell is less relevant than it used to be, at least for me personally :)


> hearing, touch, and the inner ear aren't involved

Not to mention possibly the most complex structure in the known universe, the human brain: 86 billion neurons, 100 trillion connections.


Involved? Yes. Necessary? Pretty sure no.

If it makes you happy, you can read "only vision" as "no lidar or radar." Cars already have microphones and IMUs.


1. In the US you can get a driver's license if you're deaf, so as a society we think you can drive without hearing.

2. Since this is in the context of Tesla: Tesla cars do have microphones, and FSD does use them for responding to sirens etc.


(1) is true, but actually driving is definitely harder without hearing or with diminished hearing. And several US states, including CA, prohibit inhibiting hearing while driving, e.g. by wearing a headset, earbuds, or earplugs.

The human inner ear is worse than the $3 IMU in your average smartphone in literally every way. And that IMU also has a magnetometer in it.

Beating human sensors wasn't hard for over a decade now. The problem is that sensors are worthless. Self-driving lives and dies by AI - all the sensors need to be is "good enough".


Human hearing is excellent: good directional perception and sensitivity. Eyesight is the weakest sense: poor color sensitivity, low light sensitivity, a blind spot. The eye's natural design flaws are compensated for by natural nystagmus and the brain filling in the blanks.

> The problem is that sensors are worthless

Well, in TFA the far more successful manufacturer of self driving cars is saying you're wrong. I think they're in much better position to know than you :)


As an outsider I assumed it took GM a substantial investment just to realize how far out of their depth they were. It made sense to cut their losses once they figured this out.

Having the experience and capability to manufacture cars has approximately zero benefit for creating a self-driving software/sensor stack. It would make more sense for Adobe to create a self-driving car than GM.


Cruise was being operated as a separate company though. As a default, GM could have just not done anything and let Cruise operate as if it were independent. Any synergies (personnel, manufacturing expertise, etc) would have just been a bonus. And if they didn't want the financial exposure, they could have spun it out again.

Instead they chopped it up for spare parts, specifically, sending some Cruise personnel to work on deadend GM driver assistance tech and firing the rest. Baffling.


Reputational risk to GM from the cavalier/shameful way Cruise/Kyle Vogt operated. They tried to hide the fact that they dragged a person.

I remember GM cars in Herzliya, Israel, with cables and cameras held on by duct tape circa 2019, after Andrej Karpathy had already presented end-to-end neural network training for Autopilot at Tesla. They looked very late to the party.

Cruise was always run as a separate business from GM until they shut it down. I think they got too nervous about committing to the Silicon Valley investment style: high capital, high risk, long time horizon, high reward.

Tariffs are import taxes. The pre-MAGA Republican Party used to be against taxes.

There's no hypocrisy because deep down they never had any principles. Principles they did profess were just marketing. (This applies to Dems as well, while we're at it.)

> This applies to Dems as well, while we're at it.

I disagree with this both-sidesism. Democrats are much more in line with following norms, whereas MAGA-era FKA-Republicans will throw anything aside for their benefit (e.g. Merrick Garland).


Assuming the Dems can get power again we need them to aggressively pursue leftist economic populism. As you can see from the present moment, once principles become inconvenient, they abandon them. So, yes, “both sides”. Being clear-eyed about this can save our democracy.

Wild speculation:

My hot take is that we’re in the midst of a political realignment. The Democratic Party will be the new center-right “conservative” party, and progressives will be their primary opponents. Assuming MAGA has a dumpster fire collapse when the AI bubble pops and we have the worst market contraction and affordability crisis in decades.


You don’t think a progressive takeover (a lefty Tea Party, if you will) of the Dems is more likely?

I hope progressives can keep up momentum. I worry that if things go back to normal a lot of people who are doing fine (economically) will go back to sleep.


Continued speculation: 0% shot of a progressive takeover given the complete lack of moneyed interest, and the pushback from the core Democrats any time any progressive sniffs national success.

If MAGA fails as a political movement it’ll leave a rightwing power vacuum and the Democratic Party will fill it. The progressives will split and you’ll see center right Democrats going up against left wing Progressives in the north. There will still be some vestigial Republican Party in the south that will be based mostly on anti-woke rhetoric, but won’t have any appetite for the policies that MAGA is famous for after they caused an economic collapse, and will mostly vote alongside the new northern democrats.


I think this is a misunderstanding: the party used to be against taxes for wealthy people and corporations; they never cared about taxes on consumers.

Pre-maga republicans also used to be pro open borders! A lot has realigned in the past decade or so.

They also wouldn't have tried using a holstered weapon as pretense for a public execution which the president doubled down on, until he didn't. They wouldn't be treading on state's rights so openly either. We might be seeing parties flipping, in very short order.

If the Dems pick up on some of the issues the Republicans are neglecting, while maintaining principles* about healthcare access and reproductive rights I expect they'd be the dominant political force in America for some time...if they just had somebody who could man the helm.

* Hah. What principles?


calling everything that isn't hypermilitarized border control "open borders" is getting old

One way to view the history of the Republican Party is a power struggle between Wall Street and regional/small business owners. Wall Street understands that the U.S. consumer economy depends on international trade to provide cheap, abundant goods and so supports free trade, immigration of skilled workers, and foreign aid/interventions to further U.S. business interests. For them, the culture war and nationalist rhetoric is a way to get Republican voters riled up but they don't really believe any of it.

The regional/small business owners are always threatened by competition from larger international firms and benefit less from international trade. They believe in the nationalist rhetoric and are opposed to free trade because it undercuts their businesses with cheaper products. They think the U.S. can remain the world's superpower without running a trade deficit and doesn't need to build alliances to maintain its power. This is Trump's base, and their misunderstanding of U.S. power is why they love the idea of tariffs. (good for local producers!) They want to get all the benefits of being a superpower without any of the costs.


Taxing and spending is so much fun even the Republicans can't resist the temptation

They're more Tax Cut and Spend which is infinitely worse.

They are only spending the money on domestic policing and pocketing the rest, most recently in a Qatari account...

Similarly, the people loudly protesting tariffs are traditionally for higher taxes.

Those people are generally for higher taxes on the rich. Tariffs are a flat tax that disproportionately affects the poor.

Every tax ever implemented by government has initially been sold as a tax on the rich. The people voting for it assume they will never be taxed because they aren't currently rich. But there is never enough of other people's money to spend, so taxes expand and/or increase to include more people.

The original income tax was sold as 1% on mid income and 2% on high income. At the time more than half the country was not going to pay any tax.


Maybe because the rich consistently lobby and weasel their way out of paying their fair share so the burden on everyone else continues to rise?

Or, no amount of money is ever enough for government, and taxing income is a dumb way to collect revenue.

Southern states supported an income tax because they believed it would make it possible to collect revenue for indigent people. Exactly opposite of "taxes the rich".


More than half the country still doesn't pay any (income) tax.

That is a bit of a lie. Getting a refund does not mean you didn't pay. Even getting more back than you actually paid does not mean you didn't pay. Money is taken from every paycheck. When you file your taxes the IRS decides what you get back.

Which is why the current administration and their (very rich) backers prefer tariffs to income taxes...

I believe these loud people are mostly for taxes on wealth vs. direct taxes on consumption, as the latter affect lower classes more acutely.

The proven recipe for success is buy more, earn less

Well, typically for higher progressive taxes. Tariffs are typically a regressive tax.

> higher taxes

...for the wealthy. Tariffs are use taxes and overwhelmingly affect the 99%.


For the wealthy, or for high earners? I have never seen a proposal for an income tax that scales with your net worth. They only scale with your income.

> overstates both the competence of spy agencies

Stuxnet was pretty impressive: https://en.wikipedia.org/wiki/Stuxnet


It was also not a bug to be exploited.

It was a complicated product that many people worked on in order to develop, and it took advantage of many pre-existing vulnerabilities as well as knowledge of complex and niche systems in order to work.


Yeah, Stuxnet was the absolute worst of the worst; we will likely never truly know the depths of its development, nor its true cost. It was an extremely advanced, hyper-targeted digital weapon. Nation states wouldn't even use this type of warfare against pedophiles.

Stuxnet was discovered because a bug was accidentally introduced during an update [0]. So I think it speaks more to how vulnerabilities and bugs do appear organically. If an insanely sophisticated program built under incredibly high security and secrecy standards can accidentally push an update introducing a bug, then why wouldn't it happen to Apple?

[0] https://repefs.wordpress.com/2025/04/09/a-comprehensive-anal...

