Cerebras is a bit of a stunt like "datacenters in spaaaaace".
Terrible yield: one defect can ruin a whole wafer instead of just a chip region. Poor perf./cost (see above). Difficult to program. Little space for RAM.
Their models might be impressive, but their products absolutely suck. I gave Gemini web/CLI two months and ran back to ChatGPT. Seriously, it would just COMPLETELY forget context mid-dialog. When asked about improving air quality it just gave me a list of (mediocre) air purifiers without asking for any context whatsoever, and I can list thousands of conversations like that. Shopping or comparing options is just nonexistent.
It uses Russian propaganda sources for answers and switches to Chinese mid-sentence (!) while explaining some generic Python functionality.
It’s an embarrassment, and I don’t know how they justify the 20-euro price tag on it.
I agree. On top of that, in true Google style, basic things just don't work.
Any time I upload an attachment, it just fails with something vague like "couldn't process file", whether that's a simple .md or .txt with fewer than 100 lines or a PDF. I tried making a Gem today. It just wouldn't let me save it, with some vague error too.
I also tried having it read and write stuff to "my stuff" and Google drive. But it would consistently write but not be able to read from it again. Or would read one file from Google drive and ignore everything else.
Their models are seriously impressive. But as usual Google sucks at making them work well in real products.
I don't find that at all. At work we've no access to the API, so we have to force-feed a dozen (or more) documents, code files and instruction prompts through the web UI's upload feature. The only failures I've ever had in well over 300 sessions were due to connectivity issues, not interface failures.
Context window blowouts? All the time, but never document upload failures.
I'm talking about Gemini in the app and on the web. As well as AI studio. At work we go through Copilot, but there the agentic mode with Gemini isn't the best either.
I've used their Pro models very successfully in demanding API workloads (classification, extraction, synthesis). On benchmarks it crushed the GPT-5 family. Gemini is my default right now for all API work.
As a user, however, it took me a week to ditch Gemini 3. The hallucinations were off the charts compared to GPT-5. I've never even bothered with their CLI offering.
It’s all context/use case. I’ve had weird things too, but if you only use Markdown inputs and specific prompts, Gemini 3 Pro is insane, not to mention the context window.
Also, because of the long context window (1M tokens on Thinking and Pro! Claude and OpenAI only have 128k), deep research is the best.
That being said, for coding I definitely still use Codex with GPT 5.3 XHigh lol
My experience with Antigravity is the opposite. It's the first time in over 10 years that an IDE has managed to pull me a bit out of the JetBrains suite. I did not think that was possible, as I am a hardcore JetBrains user/lover.
How can the models be impressive if they switch to Chinese mid-sentence? I've observed those bizarre bugs too. Even GPT-3 didn't have those. Maybe GPT-2 did. It's actually impressive that they managed to botch it so badly.
Google is great at some things, but this isn't it.
It's so capable at some things, and garbage at others.
I uploaded a photo of some words for a spelling bee and asked it to quiz my kid on the words. The first word it asked wasn't on the list. After multiple attempts I got it to ask only the words in the uploaded pic, but then it would get the spellings wrong in the Q&A. I gave up.
I had it process a photo of my D&D character sheet and help me debug it as I'm a n00b at the game. Also did a decent, although not perfect, job of adding up a handwritten bowling score sheet.
I don't have any of these issues with Gemini. I use it heavily every day. A few glitches here and there, but it's been enormously productive for me. Far more so than ChatGPT, which I find mostly useless.
Agreed on the product. I can't make Gemini read my emails in Gmail. One day it says it doesn't have access, the other day it says "Query unsuccessful."
Claude Desktop has no problem reaching Gmail, on the other hand :)
And it gives incorrect answers about itself and google’s services all the time. It kept pointing me to nonexistent ui elements. At least it apologizes profusely! ffs
Not a single person is using it for coding (outside of Google itself).
Maybe some people on a very generous free plan.
Their model is a fine mid-2025 model, backed by enormous compute resources and an army of GDM engineers to help the “researchers” keep the model on task as it traverses the “tree of thoughts”.
But that isn’t “the model”; that’s an old model backed by massive money.
Are there any market counterpoints that aren't really just a repackaging of:
1. "Google has the world's best distribution" and/or
2. "Google has a firehose of money that allows them to sell their 'AI product' at an enormous discount"?
These benchmarks are super impressive. That said, Gemini 3 Pro benchmarked well on coding tasks, and yet I found it abysmal. A distant third behind Codex and Claude.
Tool calling failures, hallucinations, bad code output. It felt like using a coding model from a year ago.
Even just as a general use model, somehow ChatGPT has a smoother integration with web search (than google!!), knowing when to use it, and not needing me to prompt it directly multiple times to search.
Not sure what happened there. They have all the ingredients in theory but they've really fallen behind on actual usability.
Just not search. The search product has pretty much become useless over the past 3 years, and the AI answers often only get you back to the level of search 5 years ago. This creates a sense that things are better - but really it's just become impossible to get reliable information from an avenue that used to work very well.
I don’t think this is intentional, but I think they stopped fighting SEO entirely to focus on AI. Recipes are the best example: completely gutted, with almost all recipe sites (and therefore the entire search page) run by the same company. I didn’t realize how utterly consolidated huge portions of information on the internet were until every recipe site about 3 months ago simultaneously implemented the same anti-adblock.
Competition always is. I think there was a real fear that their core product was going to be replaced. They're already cannibalizing it internally, so it was THE wake-up call.
Wartime Google gave us Google+. Wartime Google is still bumbling, and despite OpenAI's numerous missteps, I don't think it has to worry about Google hurting its business yet.
I do miss Google+. For my brain / use case, it was by far the best social network out there, and the Circle friends and interest management system is still unparalleled :)
Windows Phone was actually good. I would even say that my Lumia something-or-other was one of the best experiences I've ever had on mobile. G+ was also good. Efficient markets mean that you can "extract" rent via selling data or attention, etc., not that what is good wins.
But wait two hours for what OpenAI has! I love the competition, and how someone just a few days ago was telling me how ARC-AGI-2 was proof that LLMs can't reason. The goalposts will shift again. I feel like most of human endeavor will soon be just about trying to continuously show that AIs don't have AGI.
> I feel like most of human endeavor will soon be just about trying to continuously show that AIs don't have AGI.
I think you overestimate how much your average person-on-the-street cares about LLM benchmarks. They already treat ChatGPT or whichever as generally intelligent (including to their own detriment), are frustrated about their social media feeds filling up with slop and, maybe, if they're white-collar, worry about their jobs disappearing due to AI. Apart from a tiny minority in some specific field, people already know themselves to be less intelligent along any measurable axis than someone somewhere.
"AGI" doesn't mean anything concrete, so it's all a bunch of non-sequiturs. Your goalposts don't exist.
Anyone with any sense is interested in how well these tools work and how they can be harnessed, not some imaginary milestone that is not defined and cannot be measured.
I agree. I think the emergence of LLMs has shown that AGI really has no teeth. I think for decades the Turing test was viewed as the gold standard, but it's clear that there doesn't appear to be any good metric.
The Turing test was passed in the 80s; somehow it has remained relevant in pop culture despite the fact that it's not a particularly difficult technical achievement.
It's very hard to tell the difference between bad models and stinginess with compute.
I subscribe to both Gemini ($20/mo) and ChatGPT Pro ($200/mo).
If I give the same question to "Gemini 3.0 Pro" and "ChatGPT 5.2 Thinking + Heavy thinking", the latter is 4x slower and it gives smarter answers.
I shouldn't have to enumerate all the different plausible explanations for this observation. Anything from Gemini deciding to nerf the reasoning effort to save compute, versus TPUs being faster, to Gemini being worse, to this being my idiosyncratic experience, all fit the same data, and are all plausible.
You nailed it. Gemini 3 Pro seems very "lazy" and seems to never reason for more than 30 seconds, which significantly impacts the quality of its outputs.
Agree. Anyone with access to large proprietary data has an edge in their space (not necessarily for foundation models): Salesforce, Adobe, AutoCAD, Caterpillar.
Gemini's UX (and of course privacy cred as with anything Google) is the worst of all the AI apps. In the eyes of the Common Man, it's UI that will win out, and ChatGPT's is still the best.
They don't even let you have multiple chats if you disable their "App Activity" or whatever (wtf is with that ass naming? they don't even have a "Privacy" section in their settings the last time I checked)
And when I swap back into the Gemini app on my iPhone after a minute or so, the chat disappears. And other weird passive-aggressive take-my-toys-away behavior if you don't bare your body and soul to Googlezebub.
ChatGPT and Grok work so much better without accounts or with high privacy settings.
Been using Gemini + OpenCode for the past couple weeks.
Suddenly, I get a "you need a Gemini Access Code license" error but when you go to the project page there is no mention of this or how to get the license.
You really feel the "We're the phone company and we don't care. Why? Because we don't have to." [0] when you use these Google products.
PS for those that don't get the reference: US phone companies in the 1970s had a monopoly on local and long distance phone service. Similar to Google for search/ads (really a "near" monopoly but close enough).
You mean AI Studio or something like that, right? Because I can't see a problem with Google's standard chat interface. All other AI offerings are confusing both regarding their intended use and their UX, though, I have to concur with that.
No projects, completely forgets context mid-dialog, mediocre responses even on Thinking, research got kneecapped somehow and is completely useless now, uses Russian propaganda videos as the search material (what’s wrong with you, Google?), janky on mobile, consumes GIGABYTES of RAM on web (seriously, what the fuck?). Left a couple of tabs open overnight and my Mac was almost completely frozen because 10 tabs had consumed 8 GB of RAM doing nothing. It’s a complete joke.
Fair enough. I'm always astonished how different experiences are because mine is the complete opposite. I almost solely use it for help with Go and Javascript programming and found Gemini Pro to be more useful than any other model. ChatGPT was the worst offender so far, completely useless, but Claude has also been suboptimal for my use cases.
I guess it depends a lot on what you use LLMs for and how they are prompted. For example, Gemini fails the simple "count from 1 to 200 in words" test whereas Claude does it without further questions.
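(For what it's worth, the reference answer for that counting test is trivial to generate mechanically, which is what makes it a handy smoke test. A rough sketch, assuming the num2words package is available:)

    # Generate the expected "1 to 200 in words" answer so a model's output can be diffed against it.
    from num2words import num2words

    expected = [num2words(n) for n in range(1, 201)]
    print(expected[:3], "...", expected[-1])  # ['one', 'two', 'three'] ... two hundred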
Another possible explanation would be that processing time is distributed unevenly across the globe and companies stay silent about this. Maybe depending on time zones?
I'm leery to use a Google product in light of their history of discontinuing services. It'd have to be significantly better than a similar product from a committed competitor.
Trick? Lol, not a chance. Alphabet is a pure-play tech firm that has to produce products to make the tech accessible. They really lack in the latter, and this is visible when you see the interactions of their VPs. Luckily for them, if you start to create enough of a lead with the tech, you get many chances to sort out the product stuff.
Don't let the benchmarks fool you. Gemini models are completely useless no matter how smart they are. Google still hasn't figured out tool calling and making the model follow instructions. They seem to only care about benchmarking and being the most intelligent model on paper. This has been a problem with Gemini since 1.0 and they still haven't fixed it.
According to Elon, "sensor ambiguity" is a danger to the process [1], and therefore only a single type of sensor is allowed. (Conveniently ignores that there can be ambiguity/disagreement between two instances of the same type of sensor)
The fact that people still trust him on literally anything boggles my mind.
Sensor fusion allows you to resolve that ambiguity; I wonder if Elon is really as in touch with this as you would expect. No single sensor is perfect, they all have their problematic areas, and a good sensor fusion scheme allows you to have your sensors reinforce each other in such a way that each operates as close as possible to its area of strength.
No single sensor can ever give you that kind of resilience. Sure, it is easy in that you never have ambiguity, but that means that when you're wrong there is also nothing to catch you to indicate something might be up.
This goes for any system where you have such a limited set of inputs that you never reach quorum: the basic idea is to have enough sensors that you always have quorum, and to treat the absence of quorum as a very high-priority failure.
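As a toy sketch (purely illustrative, nothing like what any real AV stack runs), the quorum idea is just a vote count where "no quorum" is surfaced as its own failure mode instead of being silently resolved:

    # Hypothetical quorum check: the point is that "no quorum" is reported
    # as a failure in its own right, not quietly papered over.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        sensor: str
        sees_obstacle: bool

    def quorum_decision(detections, quorum=2):
        """Return (obstacle_assumed, quorum_reached)."""
        votes = sum(d.sees_obstacle for d in detections)
        if votes >= quorum:
            return True, True    # enough sensors agree there is an obstacle
        if len(detections) - votes >= quorum:
            return False, True   # enough sensors agree the path is clear
        return True, False       # no quorum: fail safe and flag it loudly

    # Camera disagrees with lidar and radar -> quorum is still reached on "obstacle".
    readings = [Detection("camera", False), Detection("lidar", True), Detection("radar", True)]
    print(quorum_decision(readings))  # (True, True)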
Even if it doesn't allow you to resolve the ambiguity, knowing that there is an ambiguity is extremely valuable. Say the lidar detects a pedestrian but the camera doesn't. Which one do you believe? Well, you propagate the ambiguity and take appropriate action, i.e. slow down, change lanes, etc. Don't drive through an area where there's a decent chance that you're going to kill someone by doing it.
Yes, absolutely. Knowledge about the fact that a conflict between sensors exists is valuable in its own right, it means you are seeing something that needs more work than simple reinforcement.
Fail safe, always. That's what I tried to get at with 'absence of quorum', it means you are in uncharted territory.
You have an extremely detailed world model including a mental model of the drivers and other road users around you. You rely on sight, sound, experience and lots of knowledge. You are aware of the social contracts at work when dealing with shared resources and your brain is many orders of magnitude more powerful than any box full of electronics.
What you can do with 'just vision' misses the fact that you are part of the hardware.
You rely on a moving camera, microphones and vibrations all together. Driven by a supposedly more advanced meatware than what tech can create today, so that it can properly reason even with faulty/missing signals.
Sensor ambiguity is straight up useful, as it can allow you to extract signals that neither sensor can fully capture. This is like... basic stuff too. Absolutely wild how he's the richest person in the world and considered an absolute genius.
Agreed, anyone who has worked on engineering a moderately complex system involving sensing has explored the power of multi-domain sensing... without sensor fusion we'd be in the stone ages.
I bought mine with cameras and a radar, which they then deprecated and left unused, even though Autopilot was better when it had radar. Then I realized that this thing would never be self-driving and that its CEO was throwing nazi salutes. Cut my losses and got rid of it. Gotta admit defeat sometimes.
Unsure if you’re trolling, but you haven’t listened to what Tesla are actually saying.
Having more sensors complicates the matter, but yes, sure, you can do that if you want to. But just using vision simplifies training a huge amount. The more you think about it, the stronger this argument is. Synthesising data is a lot easier if you’re dealing with one fairly homogenous input.
But the real point is that cameras are cheap, so you can stick them in many many vehicles and gather vast amounts of data for training. This is why Waymo will lose - either to Tesla or more likely a Chinese car manufacturer.
I do not like Elon because I do not think nazi salutes or racism are cool, but I do think Tesla are correct here. Waymo wins for a while, then it dies.
Cameras are only "cheap" because of mobile phone camera development, radar/lidar is going through the same process with car and mobile robotics.
So the "we can train cheaply because of lots of cameras" falls down when, for example, BYD has all of its cars with lidar for ADAS but can collect the data for training as well as the vision from cameras and whatever other sensors like tyre pressures and suspension readings and all the other sensors that are on a modern car.
The argument that we can make the cars cheaper in the future by not collecting the additional data now has been proven wrong by the CN and KR manufacturers.
That's also independent of the whole EV side of things.
It's just that the cost of lidars is falling like crazy, with new automotive lidars using phased-array laser optics instead of what Waymo started with (mechanically scanned lidars).
The data is key. You need a lot of homogenous data collected at vast scale over places and time, and you need to be able to synthesise data accurately.
Waymo gets limited data from very limited locations, and will have a harder time synthesising data than others.
Do Tesla fans think that? I've seen plenty of Tesla fans say that lidar is unnecessary (which I tend to agree with), but never that lidar is actively detrimental as Musk says there.
I mean, humans have only their eyes. And most of them intentionally distract themselves while driving by listening to music, podcasts, playing with their phones, or eating.
I get your point about camera vs lidar. Humans do have other senses in play while driving though. We have touch/vibration (feeling the road surface texture), hearing, proprioception / acceleration sense, etc. These are all involved for me when I drive a car.
Humans are not good drivers when it comes to long, monotonous rides (because we get tired)
But (some) humans have the ability to handle difficult situations, and no autonomous system gets anywhere close to that. So this is more of a "robots handle the easy 80% better, but fail hard on the rest of the 20%". Humans have a possibly worse 80% performance, but shine in the 20%.
Actually humans are fairly good drivers. The average US driver goes almost 2 million miles between causing injury collisions. Take the drunks and drug users out and the numbers for humans look even better.
Personally, as much as people like to dunk on Musk, he did build several successful companies in extremely challenging domains, and he probably listens to the world-leading domain experts in his employ.
So while he might turn out to be wrong, I don't think his opinion is uninformed.
I fully agree with your first point: Musk has shown tremendous ability to manage companies to become unicorns. He's clearly skilled in this domain.
However, if you think about this for 2 seconds with even a rudimentary understanding of sensor fusion, more hardware is always better (ofc with diminishing marginal value).
But ~10y ago, when Tesla was in a financial pinch, Musk decided to scrap as much hardware as possible to save on operational cost and complexity. The argument about "humans can drive with vision only, so self-driving should be able to as well" served as the excuse to shareholders.
> humans can drive with vision only, so self-driving should be able to as well
In May 2016, Tesla Model S driver Joshua Brown died in Williston, Florida, when his vehicle on Autopilot collided with a white tractor-trailer that turned across the highway. The Autopilot system and driver failed to detect the truck's white side against a brightly lit sky, causing the car to pass underneath the trailer.
Our eyes are supported by our brain's AGI which can evaluate the input from our eyes in context. All Tesla had is a camera, and it didn't perform as well as eyes + AGI would have.
When you don't have AGI, additional sensors can provide backup. LiDAR would have saved Joshua Brown's life.
I'm an EE and I have worked with things like sensor fusion professionally. In short, sensor fusion depends on what sensors you have and how you combine them, especially when two sensors' outputs tend to disagree: which one is wrong and to what extent, and how a given piece of noise shows up in each sensor's output, so you avoid double-counting errors and coming up with unjustifiably confident results.
This field is extremely complex; it's often better to pick a sensor and stick with it rather than trying to figure out how to piece together data from very dissimilar sources.
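As a toy example of that double-counting point (assuming independent, zero-mean Gaussian noise, which is exactly the assumption that breaks in practice), plain inverse-variance weighting fuses two readings like this; if the errors are actually correlated, the fused variance comes out smaller than it should, which is the "unjustifiably confident" failure mode:

    # Toy inverse-variance fusion of two noisy range measurements.
    # Assumes independent Gaussian noise; correlated errors make the fused
    # variance overconfident, i.e. the double-counting problem described above.
    def fuse(x1, var1, x2, var2):
        w1, w2 = 1.0 / var1, 1.0 / var2
        x = (w1 * x1 + w2 * x2) / (w1 + w2)
        var = 1.0 / (w1 + w2)
        return x, var

    # e.g. camera estimate 10.2 m (variance 1.0), radar estimate 9.8 m (variance 0.25)
    print(fuse(10.2, 1.0, 9.8, 0.25))  # (9.88, 0.2): pulled toward the more precise sensor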
> I'm an EE and I have worked with things like sensor fusion professionally. In short, sensor fusion depends on what sensors you have and how you combine them, especially when two sensors' outputs tend to disagree: which one is wrong and to what extent, and how a given piece of noise shows up in each sensor's output, so you avoid double-counting errors and coming up with unjustifiably confident results.
> This field is extremely complex; it's often better to pick a sensor and stick with it rather than trying to figure out how to piece together data from very dissimilar sources.
Whether sensor fusion makes sense is a highly domain specific question. Guidance like "pick a sensor and stick with it" might have been correct for the projects you've worked on, but there's no reason to think this translates well to other domains.
For what it's worth, sensor fusion is extremely common in SLAM type applications.
What doesn’t make sense to me is that the cameras are nowhere near as good as human eyes. The dynamic range sucks, it doesn’t put down a visor or wear sunglasses to deal with glare, resolution is much worse, etc. Why not invest in the cameras themselves if this is your claim?
I always see this argument but from experience I don't buy it. FSD and its cameras work fine driving with the sun directly in front of the car. When driving manually I need the visor so far down I can only see the bottom of the car in front of me.
The cameras on Teslas only really lose visibility when dirty. Especially in winter when there's salt everywhere. Only the very latest models (2025+?) have decent self-cleaning for the cameras that get very dirty.
"works fine" as in can follow a wide asphalt roads' white lines. That is absolutely trivial thing, Lego mind storms could follow a line just fine with a black/white sensor.
This vision clearly doesn't scale to more complex scenarios.
For which car? The older the car (hardware) version the worse it is. I've never had any front camera blinding issues with a 2022 car (HW3).
The thing to remember about cameras is that what you see in an image/display is not what the camera sees. Processing the image reduces the dynamic range, but FSD could work off of the raw sensor data.
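A crude illustration of that point (hypothetical numbers and a toy tone-map, not anything from Tesla's actual pipeline): a 12-bit raw value can still distinguish highlights that an 8-bit display image has already clipped to the same white.

    import numpy as np

    # Hypothetical 12-bit raw sensor values for three bright points in a scene (0..4095).
    raw = np.array([1500, 3000, 4095])

    # Crude stand-in for a display tone-map: apply gain for the mid-tones, then clip to 8 bits.
    gain = 0.4
    display = np.clip(raw * gain, 0, 255).astype(np.uint8)

    print(raw)      # [1500 3000 4095] -> still distinct in the raw data
    print(display)  # [255 255 255]    -> all three clipped to the same white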
It doesn't run well on HW3 at all. HW4 has significantly better FSD when running comparable versions (v14). The software has little to do with the front camera getting blinded though.
And to some extent, I also drive with my ears, not only with my 2 eyes. I can often hear a car in my blind spot. Not saying that I need to hear in order to drive, but the extra sensor is welcome when it can help.
There is an argument for sure, about how many sensors is enough / too much. And maybe 8 cameras around the car is enough to surpass human driving ability.
I guess it depends on how far we want to take self-driving and how safe we want it to be. If only we had a comprehensive driving test that all (humans and robots) could take and be ranked on... each country's lawmakers could set the bar based on the test.
The other day I slammed the brakes at a green light, because I could hear sirens approaching -- even though the buildings on the corner prevented any view of the approaching fire trucks or their flashing lights. Do Teslas not have this ability?
Nuanced point: Even if vision alone were sufficient to drive, adding sensors to the cars today could speed up development. Tesla‘s world model could be improved, speeding up development of the vision only model that is truly autonomous.
Lowest cost per mile will win, and Tesla's cyber cab doesn't need an expensive suite of sensors. They use lidar in their validation/calibration test cars, which is the correct use of lidar. People are already driving USA coast to coast without a SINGLE intervention. It's already over, Tesla has won; Waymo can't compete on cost.
Been hearing this bullshit for a decade. Any day now…
Meanwhile Waymo is doing half a million rides a week, and Tesla is doing what, a few dozen? Maybe? Maybe zero? Who knows, because they lie and obfuscate about everything. Meanwhile I can go take a Waymo right now in cities all over America.
> I fully agree with your first point: Musk has shown tremendous ability to manage companies to become unicorns. He's clearly skilled in this domain.
I would firmly disagree with that.
What Musk has done is bring money to develop technologies that were generally considered possible, but were being ignored by industry incumbents because they were long-term development projects that would not be profitable for years. When he brings money to good engineers and lets them do their thing, pretty good things happen. The Tesla Roadster, Model S, Falcon 9, Starlink, etc.
The problem with him is he's convinced that he is also a good engineer, and not only that but he's better than anyone that works for him, and that has definitively been proven wrong. The more he takes charge, the worse it gets. The Model X's stupid doors, all the factory insanity, the outdoor paint tent, etc. Model 3 and Model Y arguably succeeded in spite of his interference, but the Dumpstertruck was his baby and we can all see how that has basically only sold to people who want to associate themselves closely with his politics because it's objectively bad at everything else. The constant claims that Tesla cars will drive themselves, the absolute bullshit that is calling it "Full Self Driving", the hilarious claims of humanoid robots being useful, etc. How are those solar roofs coming? Have you heard of anyone installing a Powerwall recently? Heard anything about Roadster 2.0 since he went off claiming it would be able to fly? A bunch of Canadian truckers have built their own hybrid logging trucks from scratch in the time since Tesla started taking money for their semis and we still haven't seen the Tesla trucks haul more than a bunch of bags of chips.
The more Musk is personally involved with a project the worse it is. The man is useful for two things: Providing capital and blatantly lying to hype investors.
If he had stuck to the first one the world as a whole would be a better place, Tesla would probably be in a much better position right now.
SpaceX was for a long time considered to be further from his influence with Shotwell running the company well and Musk acting more as a spokesperson. Starship is sort of his Model X moment and the plans to merge in the AI business will IMO be the Cybertruck.
You say that you disagree with my point, but then your first paragraph just restates my argument. And your subsequent paragraphs don‘t refer to my comment at all.
I never claimed he‘s a good engineer, nor that he has high EQ, nor that he is honest, nor that he has sole responsibility for the success of his companies.
Home batteries are being installed at insane rates in Australia at the moment. Very few of them are Powerwalls because Tesla have priced themselves out of the market (and also Elon’s reputation is toast).
Strongly disagree. I don‘t like the fella, but thinking that he founds and successfully manages SpaceX and Tesla to their market value _by chance_ is ridiculous.
> The fact that people still trust him on literally anything boggles my mind.
Long-distance amateur psychology question: I wonder if he's convinced himself that he's a smart guy, after all he's got 12 digits in his net worth, "How would that have been possible if I were an idiot?".
Anyway, ego protection is how people still defend things like the Maga regime, or the genocide; it's hard for someone to admit that they've been stupid enough to have been fooled to vote for "Idi Amin in whiteface" (term coined by Literature Nobel Prize winner Wole Soyinka), or that the "nation's right to self-defense" they've been defending was a thin excuse for mass murder of innocents.
I've always wondered how people who are not 1/10th as smart as Elon convince themselves that he is not intelligent after solving robotics, AI, neuralink, and space all simultaneously.
Well, given that Elon openly lies on investor calls...
One of his latest, on the topic of rain/snow/mist/fog and handling with cameras:
"Well, we have made that a non-issue as we actually do photon counting in the cameras, which solves that problem."
No, Elon, you don't. For two reasons: reason one, part A, the types of cameras that do photon counting don't work well for normal 'vision'/imagery associated with cameras, and part B, are not actually present in your cars at all. And reason two, photon counting requires the camera being in an enclosed space to work, which cars on the road ... aren't.
What Elon has mastered the art of is making statements that sound informed, pass the BS detector of laypeople, and optionally are also plausibly deniable if actually called out by an SME.
I don't think it's purely stubbornness. Tesla sold the promise of software-only updates resulting in FSD to hundreds of thousands of people. Not all of those people are in the cult of Tesla. I would expect admitting defeat at this point would result in a large class-action lawsuit at the very least.
It wouldn't keep them from equipping _new_ models with additional sensors, spinning a story around how this helps them train the camera-only AI, or whatever.
It’s vaporware, and it’s dollars and cents. Tesla EVs are already too expensive. He has no margin to spend thousands more on sensors, the alternative being the lawsuits that would follow if he admits it was all vaporware.
I know it's "illegal" and technically sold as FSD (assisted), but just 2 days ago I was in a friend's Model Y and it drove from work to my house (both in San Jose) without any steering wheel or pedal touch, at all. And he told me he went to Palm Springs like that too.
I shit on Tesla and Elon on any opportunity, and it's a shame they basically have the software out there doing things when it probably shouldn't, but I don't think they're that far behind Waymo where it really matters, which is the thing actually working.
Nice illusion of competence in easy conditions... until it hits a person and the Tesla EV performs far worse than the Waymo, both during the crash and afterwards PR-wise. Guarantee you Elon will throw the driver under the bus for not watching, not his sketchy system.
Elon cult members still to this day will tell me that because humans only use vision to drive all a Tesla needs is simple cameras. Meanwhile, I've been driven by Waymo and Tesla FSD and Waymo is by far my pick for safety and comfort. I actually trusted the waymo I was in, while the Tesla I rode in we had 2 _very_ scary incidents at high speeds in a 1 hour drive.
I love this argument because it is so obviously wrong: how could any self aware person seriously argue that hearing, touch, and the inner ear aren't involved in their driving?
As an adult I can actually afford a reliable car, so I will concede that smell is less relevant than it used to be, at least for me personally :)
(1) is true, but actually driving is definitely harder without hearing or with diminished hearing. And several US states, including CA, prohibit inhibiting hearing while driving, e.g., by wearing a headset, earbuds, or earplugs.
Human inner ear is worse than a $3 IMU in your average smartphone in literally every way. And that IMU also has a magnetometer in it.
Beating human sensors wasn't hard for over a decade now. The problem is that sensors are worthless. Self-driving lives and dies by AI - all the sensors need to be is "good enough".
Human hearing is excellent. Good directional perception and sensitivity.
Eyesight is the weakest sense. Poor color sensitivity, low light sensitivity, a blind spot. The terrible natural design flaws are compensated for by natural nystagmus and the brain filling in the blanks.
Well, in TFA the far more successful manufacturer of self driving cars is saying you're wrong. I think they're in much better position to know than you :)
As an outsider I assumed it took GM a substantial investment just to realize how far out of their depth they were. It made sense to cut their losses once they figured this out.
Having experience and the capability to manufacture cars has approximately zero benefit for creating a self-driving software/sensor stack. It would make more sense for Adobe to create a self-driving car than GM.
Cruise was being operated as a separate company though. As a default, GM could have just not done anything and let Cruise operate as if it were independent. Any synergies (personnel, manufacturing expertise, etc) would have just been a bonus. And if they didn't want the financial exposure, they could have spun it out again.
Instead they chopped it up for spare parts, specifically, sending some Cruise personnel to work on deadend GM driver assistance tech and firing the rest. Baffling.
I remember GM cars in Herzliya, Israel with cables and cameras held on by duct tape circa 2019, after Andrej Karpathy had already presented end-to-end neural network training for Autopilot at Tesla. They looked very late to the party.
Cruise was always run as a separate business from GM until they shut it down. I think they got too nervous about committing to the Silicon Valley investment style: high capital, high risk, long time horizon, high reward.
There's no hypocrisy because deep down they never had any principles. Principles they did profess were just marketing. (This applies to Dems as well, while we're at it.)
> This applies to Dems as well, while we're at it.
I disagree with this both-sidesism. Democrats are much more inclined to follow norms, whereas MAGA-era FKA Republicans will throw anything aside for their benefit (e.g. Merrick Garland).
Assuming the Dems can get power again we need them to aggressively pursue leftist economic populism. As you can see from the present moment, once principles become inconvenient, they abandon them. So, yes, “both sides”. Being clear-eyed about this can save our democracy.
My hot take is that we’re in the midst of a political realignment. The Democratic Party will be the new center-right “conservative” party, and progressives will be their primary opponents. Assuming MAGA has a dumpster fire collapse when the AI bubble pops and we have the worst market contraction and affordability crisis in decades.
You don’t think a progressive takeover (a lefty Tea Party, if you will) of the Dems is more likely?
I hope progressives can keep up momentum. I worry that if things go back to normal a lot of people who are doing fine (economically) will go back to sleep.
Continued speculation: 0% shot of a progressive takeover given the complete lack of moneyed interest, and the pushback from the core Democrats any time any progressive sniffs national success.
If MAGA fails as a political movement it’ll leave a rightwing power vacuum and the Democratic Party will fill it. The progressives will split and you’ll see center right Democrats going up against left wing Progressives in the north. There will still be some vestigial Republican Party in the south that will be based mostly on anti-woke rhetoric, but won’t have any appetite for the policies that MAGA is famous for after they caused an economic collapse, and will mostly vote alongside the new northern democrats.
They also wouldn't have tried using a holstered weapon as pretense for a public execution which the president doubled down on, until he didn't. They wouldn't be treading on state's rights so openly either. We might be seeing parties flipping, in very short order.
If the Dems pick up on some of the issues the Republicans are neglecting, while maintaining principles* about healthcare access and reproductive rights I expect they'd be the dominant political force in America for some time...if they just had somebody who could man the helm.
One way to view the history of the Republican Party is a power struggle between Wall Street and regional/small business owners. Wall Street understands that the U.S. consumer economy depends on international trade to provide cheap, abundant goods and so supports free trade, immigration of skilled workers, and foreign aid/interventions to further U.S. business interests. For them, the culture war and nationalist rhetoric is a way to get Republican voters riled up but they don't really believe any of it.
The regional/small business owners are always threatened by competition from larger international firms and benefit less from international trade. They believe in the nationalist rhetoric and are opposed to free trade because it undercuts their businesses with cheaper products. They think the U.S. can remain the world's superpower without running a trade deficit and doesn't need to build alliances to maintain its power. This is Trump's base, and their misunderstanding of U.S. power is why they love the idea of tariffs. (good for local producers!) They want to get all the benefits of being a superpower without any of the costs.
Every tax ever implemented by government has been initially sold as a tax on the rich. The people voting for it assume they will never be taxed because they aren't currently rich. But, there is never enough of other peoples money to spend. So, taxes expand and/or increase to include more people.
The original income tax was sold as 1% on mid income and 2% on high income. At the time more than half the country was not going to pay any tax.
Or, no amount of money is ever enough for government, and taxing income is a dumb way to collect revenue.
Southern states supported an income tax because they believed it would make it possible to collect revenue for indigent people. Exactly opposite of "taxes the rich".
That is a bit of a lie. Getting a refund does not mean you didn't pay. Even getting more back than you actually paid does not mean you didn't pay. Money is taken from every paycheck. When you file your taxes the IRS decides what you get back.
For the wealthy, or for high earners? I have never seen a proposal for an income tax that grades with your net worth. They only grade with your income.
It was a complicated product that many people worked on in order to develop, and it took advantage of many pre-existing vulnerabilities as well as knowledge of complex and niche systems in order to work.
Yeah, Stuxnet was the absolute worst of the worst; the depths of its development we will likely never truly know, and the cost of its development we will never truly know either. It was an extremely advanced, hyper-targeted digital weapon. Nation-states wouldn't even use this type of warfare against pedophiles.
Stuxnet was discovered because a bug was accidentally introduced during an update [0]. So I think it speaks more to how vulnerabilities and bugs do appear organically. If an insanely sophisticated program built under incredibly high security and secrecy standards can accidentally push an update introducing a bug, then why wouldn't it happen to Apple?