Google also has aistudio.google.com, which is a Lovable competitor, and it's free for unlimited use. It seems to work much better than Gemini CLI, even on similar tasks.
YMMV, but I've had pretty good luck with just force closing it and launching again when getting errors like that. It doesn't necessarily mean the whole environment is corrupt, even though that is the recovery option that is presented.
It is very unreliable though. I hope Android 17 improves it, as other than the restart issues, I've generally found it to be very functional.
You know what's weird? This is a company that has been using the fleet API for quite a while now to monitor non-professional drivers using FSD on their daily commutes, often while they're distracted doing other things. The latest versions even allow some phone usage.
And yet people are skeptical. I mean, they should be skeptical, given that the company is using this for marketing purposes. It doesn't make sense to just believe them.
But it is strange to see this healthy skepticism juxtaposed with the many unskeptical comments attached to recent Electrek articles with outlandish claims.
I've always assumed that the answer to this would be no. However, I also always assumed that a huge space-based internet system would be both expensive and impractical for bandwidth and latency.
Starlink has largely defied those expectations, thanks to their approach of relentlessly optimizing launch costs.
It is possible that I'm overlooking some similar fundamental advancement that would make this less impractical than it sounds. I'm still really skeptical.
Yeah, I remember being amazed at the immediate incremental compilation on save in VisualAge for Java many years ago. Today's Neovim users have features that even the most advanced IDEs didn't have back then.
I think a lot of people in the industry forget just how much change has come from 30 years of incremental progress.
TBH, the comments here amaze me. The claim is that a human being paid to monitor a driver-assistance feature is 3x more likely to crash than a human alone.
That claim needs extraordinary evidence. Instead, the evidence offered is a set of misleading guesses.
Waymo studied this back when it was a Google moonshot and concluded that going straight to full automation is safer than human supervision. A driving system that mostly works lulls the driver into complacency.
Besides automation failure, driver complacency was a big component[1] of the fatal accident that led to the shuttering of Uber's self-driving efforts: the safety driver was looking at her phone for minutes in the lead-up. It is also the reason driver attention is monitored in L2 systems.
If rider and pedestrian safety is the main concern, the automated assistance and safety systems that car manufacturers were already developing make the most sense. They either warn or intervene in situations where the human may not realize they are in danger and/or cannot respond in time. Developing these solves the harder problems first; automation is easy in comparison.
The idea of mostly automating the system because it's statistically better than humans, while requiring a human assistant to monitor and respond in exactly those hard situations, was flawed logic to begin with. Comparisons of statistics should be made like-for-like, given that these are scenarios we can easily control.
For example, a robotaxi should at least be compared to professional drivers on similar routes, roads, vehicles, and times of day, not to "all drivers in all vehicles in all scenarios over time" using private company data that cherry-picks "automated driving" miles on highways, etc. (where existing assistance systems could already achieve near-perfect results).
Companies testing autonomy on the public should be forced to upload all crash data to investigators as part of their licensing. The vehicles already have extremely detailed sensor and video data to operate. The fact that we have no verified data to compare to existing human statistics is damning. It's a farce.
Sure, but we now have millions of miles of Tesla Autopilot and FSD data in the hands of untrained and often semi-malicious end users as well. Out of that data, we've gotten flawed reports from Tesla claiming it is dramatically safer, as well as independent renormalizations showing it to be at best about the same.
None of those millions of miles resulted in a smoking gun showing the cars to be even 2x worse.
And yet a badly written blog post thinks it has shown them to be 3x worse with professional monitors? This is indeed an extraordinary claim.
That... is not really an extraordinary claim. That has been many people's null hypothesis since before this technology was even deployed, and the rationale for it is sufficiently borne out to play a role in vigilance systems across nearly every other industry that relies on automation.
A safety system with blurry performance boundaries is called "a massive risk." That's why responsible system designers first define their ODD, then design their system to perform to a pre-specified standard within that ODD.
Tesla's technology is "works mostly pretty well in many but not all scenarios, and we can't tell you which is which."
It is not an extraordinary claim at all that such a system could yield worse outcomes than a human with no assistance.
Good analysis. Just over a month ago, an Electrek piece was posted here claiming that Teslas with humans were crashing 10x more than humans alone.
That was based on a sample size of 9 crashes. In the month following that, they've added one more crash while also increasing the miles driven per month.
The headline could just as easily be about the dramatic decline in their crash rate! Or perhaps the data set is just too small to analyze like this, and the Electrek authors are being their usual overly dramatic selves.
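For a sense of just how small: here's a quick sketch of exact 95% Poisson confidence intervals on those crash counts (treating crashes as a Poisson process is my modeling assumption, not something from the article):

    # Exact (Garwood) 95% CI for a Poisson count, via chi-square quantiles.
    from scipy.stats import chi2

    def poisson_ci(k, alpha=0.05):
        lo = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
        hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
        return lo, hi

    for k in (9, 10):
        lo, hi = poisson_ci(k)
        print(f"{k} crashes -> 95% CI on expected count: [{lo:.1f}, {hi:.1f}]")
    # 9 crashes  -> roughly [4.1, 17.1]
    # 10 crashes -> roughly [4.8, 18.4]

The two intervals overlap almost entirely, so neither a dramatic decline nor a scary multiplier is distinguishable from noise at these counts.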
That is an overly optimistic way to phrase an apparent decrease in crashes, when Tesla is not being upfront about data that at best looks worse than human crash rates.
Unless one were a Tesla insider, or cared far more about Tesla than about the other people on the road, proposing that spin would not be a normal thing to do.
Media outlets, even ones devoted to EVs, should not adopt the very biased framing you propose.
Previous article: Tesla with human supervisor at wheel: 10x worse than human alone.
Current article: Tesla with remote supervisor: 3-9x worse than human alone.
Even with the small sample sizes, this shows a clear trend: Tesla's Autopilot stuff (or perhaps vehicle design) is causing a ton of accidents, regardless of whether it's being operated locally by customers or remotely by professionals.
I'd like to see similar studies broken down by vehicle manufacturer.
The ADAS in one of our cars is great, but occasionally beeps when it shouldn't.
The ADAS in our other car cannot be disabled and produces a false positive every 10-20 miles. Every week or so it forces the vehicle out of its lane (either left of the double-yellow center line, or into another car's lane).
If the data on crash rates for those two models were public, I guarantee the latter car would have been recalled by now.
I don’t think statistics work that way. A study of all Teslas and all humans in Austin over 5 months is valid because Electrek ran a ridiculous “study”, and this headline could “just as easily” have presented the flawed Electrek story as a legit baseline?
The 10x would be 9x if the methodology were the same. The 9x -> 3x step comes from going from reported accidents to the inferred true accident rate (many human crashes never get reported, while every robotaxi incident is logged), as the article points out.
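For concreteness, here's a toy version of that second adjustment. The reporting fraction is my illustrative assumption, chosen only to reproduce the article's 9x -> 3x step; it is not a figure from the article:

    # Hypothetical underreporting correction: scale the ratio computed against
    # *reported* human crashes by the fraction of human crashes actually reported.
    reported_ratio = 9            # robotaxi crash rate vs. reported human rate
    human_reporting_rate = 1 / 3  # assumed fraction of human crashes reported
    true_ratio = reported_ratio * human_reporting_rate
    print(true_ratio)             # 3.0 -- the inferred like-for-like ratio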
It is sad, but big sedans do not sell well, and the X really needed to be replaced with something completely different. There are now several other 3-row EV SUVs competing with it, and even low-volume ones (e.g., the R1S) easily outsell it.
Don't be surprised if something else takes its place, as they do need something larger than the Y and less expensive than the X was.
They are almost exclusively focused on autonomous cars, humanoid robots, and energy (batteries now, maybe more solar manufacturing later).
As much as I dislike it, I can't disagree with the business case here. They already have >300k monthly subscribers at about $100/month (roughly $30M/month, or ~$360M/year, in recurring revenue). That business will grow rapidly from here, as will the robotaxi business itself.
Within 2 years, this business will look radically different just because of these two changes.