Then stop deploying camera-only systems until that time comes.
Waymo could be working on camera-only. I don’t know. But it’s not controlling the car. And until such time as they can prove with their data that it is just as safe, that seems like a very smart decision.
Tesla is not taking such a cautious approach. And they’re doing it on public roads. That’s the problem.
> Hacker News has always assumed this problem is easy. It is not.
That’s the problem right there.
It’s EXTREMELY hard.
Waymo has very carefully increased its abilities, tip-toeing forward little by little until after all this time they’ve achieved the abilities they have with great safety numbers.
Tesla appears to continuously make big jumps it seems totally unprepared for, yelling “YOLO,” and then expects to be treated the same when it doesn’t work out by saying “but it’s hard.”
I have zero respect for how they’ve approached this since day 1 of autopilot and think what they’re doing is flat out dangerous.
So yeah. Some of us call them out. A lot. And they seem to keep providing evidence we may be right.
I’ve often felt that much of the crowd touting how close the problem was to being solved was conflating the driving problem with a mere perception problem. Perception is just a sub-space of the driving problem.
Genuine question though: has Waymo gotten better at their reporting? A couple of years back they seemingly inflated their safety numbers by sanitizing the classifications with subjective judgments like “a human would have crashed too, so we don’t count it as an accident.” That is measuring something quite different from how safety numbers are colloquially interpreted.
It seems like there is a need for more standardized testing and reporting, but I may be out of the loop.
> achieved the abilities they have with great safety numbers.
Driving around in good weather and never on freeways is not much of an achievement. Having vehicles that continually interfere with active medical and police cordons isn't particularly safe, even though there haven't been terrible consequences from it, yet.
If all you're doing is observing a single number, you're drastically underprepared for what happens when they expand this program beyond these paltry self-imposed limits.
> Some of us call them out.
You should be working to get their certificate pulled at the government level. If this program is so dangerous then why wouldn't you do that?
> And they seem to keep providing evidence we may be right.
It's tragic you can't apply the same logic in isolation to Waymo.
Freeway accidents, due to their nature, are a lot harder to ignore and underreport than accidentally bumping or scraping into another car at low speeds. It's like using murder rates to estimate real crime rates because murders, unlike most other crimes, are far more likely to be properly documented.
Elon definitely has this cult of personality around him where people will jump in and defend his companies (as a stand-in for him) on the internet, even in the face of some common sense observations. I don't get the sense that anything you've said is particularly reasonable outside of being lured in by Elon's personality.
This is absolutely true. There is a flip side however, where people who dislike Elon Musk will sometimes talk up his competitors, seemingly for no good reason other than them being at least nominally competitors to Musk companies. Nikola and Spinlaunch are two that come to mind; quite blatant scams that have gotten far too much attention because they aren't Musk companies.
Tesla FSD is crap. But I also think we wouldn't see quite so much praise of Waymo unless Tesla also had aspirations in this domain. Genuinely, what is so great about a robo taxi even if it works well? Do people really hate immigrants this much?
I think we’d see praise, but maybe not as much. Every time it’s clear Tesla screwed up it’s an incredibly obvious thing to do to compare them to the number one self driving car out there.
Tesla provides such an obvious anchor point for comparisons it’s really hard for Waymo not to come out on top.
What’s so great about a robotaxi even if it works well? It’s neat. As a technology person I like that it exists. I don’t know past that. I’ve never used one; they’re not deployed where I live.
It isn't about hatred of the human drivers for me. Waymo's service is so safe and consistent that I would trust my 10-yr-old to take a ride in it solo if it were permitted by the ToS. Most Uber/Lyft/etc. rides are just as safe, but due to the inconsistency I would never reach that level of trust.
I don't live in a covered area, but when I am in range I will gladly pay 10-20% more for a Waymo ride than an Uber/Lyft/etc.
Kind of like how people maintained that LLMs were trash well past the point where it was obvious that that wasn't true anymore, I often wonder how many people who talk confidently about Tesla FSD have actually used a recent version. Because when we tried a recent FSD and Waymo, we found FSD to be excellent in handling pretty complex scenarios, including one of the worst, a busy airport loop, and we found Waymo to behave a bit weirdly (but still good). But FSD clearly isn't the dumpster fire that people try to make it out to be. v12 was a bit sketchy, and I was too nervous to use it past the first couple of times I tried it, but v14 is great.
I didn’t say they were. They do have a bias. I have the same one.
My comment was aimed at the implication that the data might be untrustworthy because they were the ones reporting it.
So I pointed out it wasn’t their data.
As for “spin,” Elon has been telling us for a long time that FSD is safer than humans and will save lives. We appear to have objective data that counters that narrative.
Who said the data was untrustworthy? The source of the article presents the data in a highly negative light, which it does in 99% of its articles, so it's a worthless website for reporting data of this sort.
It's basically a few light bumps at a snail's pace, probably caused by other cars. The article's headline reads as if it mowed down a group of school children.
In the US, laws were only for the affluent, male, white class anyway. Richard Rothstein wrote a great book about it called "The Color of Law: A Forgotten History of How Our Government Segregated America".
It’s an article about the president making memecoins and doing rug pulls to enrich himself. That could also easily be used as a method to transfer very large bribes.
It’s very clearly a comment on the lack of norms or enforcement of rules (emoluments clause, among MANY others) on the presidency.
Not the person you are asking but I would require a better analyzer. It must be able to recognize children in sexual poses, children with exposed genitalia, children performing oral copulation or children being penetrated. If AI can be told to create a thing it should be able to identify that same thing. If Grok can not identify that which it was told to create that is potentially a bigger issue as someone may have nerfed that ability on purpose.
There are psychological books on identifying signs of prepubescence based on facial and genitalia features that one can search for if they are in that line of work. Some of the former Facebook mods with PTSD know what I am referring to.
Leave everything else to manual flagging, assuming Grok has a flag or report button that is easy to find. If not, send links to these people [1] if in the USA.
> Unfortunately, all of these issues come from humans.
They are. They’ve always been there.
The problem is that LLMs are a MASSIVE force multiplier. That’s why they’re a problem all over the place.
We had something of a mechanism to gate the amount of trash on the internet: human availability. That no longer applies. SPAM, in the non-commercial sense of just noise that drowns out everything else, can now be generated thousands of times faster than real content ever could be. By a single individual.
It’s the same problem with open source. There was a limit to the number of people who knew how to program enough to make a PR, even if it was a terrible one. It took time to learn.
AI automated that. Now everyone can make massive piles of complicated plausible looking PRs as fast as they want.
To whatever degree AI has helped maintainers, it is not nearly as effective a tool at helping them as it is at helping others generate things to waste their time. Intentionally or otherwise.
You can’t just argue that AI can be a benefit therefore everything is fine. The externalities of it, in the digital world, are destroying things. And even if we develop mechanisms to handle the incredible volume will we have much of value left by the time we get there?
This is the reason I get so angry at every pro AI post I see. They never seem to discuss the possible downsides of what they’re doing. How it affects the whole instead of just the individual.
There are a lot of people dealing with those consequences today. This video/article is an example of it.
Allowing something isn't the same as enforcing it to be allowed. If there's a regulation, like with ending roaming charges between countries, then it's required to be followed simultaneously across the EU. If there's a directive, like the Working Time Directive, the goals of the legislation are set out and each member state is required to introduce legislation that implements it. There are also decisions (for one country on one issue), recommendations, and opinions (obviously non-binding).
There's also the Court of Justice which is the highest court, but only in EU matters. National courts can refer cases to it, or the commission/member states can bring cases against other member states, if they believe they are not following EU law. This would mean either they are not following a regulation, or that the state has not fully/correctly implemented a directive into their own national laws.
As I understand it, there's no specific regulation or directive aimed at gambling itself. There are things tangentially related (data protection, anti-money laundering, etc.). But since there's no regulation or directive saying "gambling must be allowed", there's nothing stopping a member state banning it completely if they so wish.
The only point in which the EU might step in would be if the law was somehow discriminatory or inconsistent (e.g. we ban all foreign gambling sites, but not our own, we ban lottery tickets but not state run casinos, etc).
Germany has regulated it (though states have slightly different regulations, some states even allowing online gambling, some banning all except the government-run lottery).
Technically no, because EU directives aren't applied as written. They're goals for member states to make into national laws, which intentionally leaves them some leeway.
However, national law must reasonably satisfy EU directives, otherwise CJEU could determine that a member state is infringing EU law and fine them until they amend their law.
I’ve never figured out what I think advertising should be. I currently do basically everything I can to get rid of it in my life.
I’m totally fine with outlawing targeted advertising. But even classic broadcast stuff poses a dilemma for me.
I have absolutely noticed I miss out on some things. As an easy example, I don’t tend to know about new TV shows or movies that I might like the way I used to. There’s never that serendipity where you’re watching a show and all of a sudden a trailer for a movie comes on and you say “What is THAT? I’ve got to see that.”
Maybe some restaurant I like is moving into the area. Maybe some product I used to like is now back on the market. It really can be useful.
Sure the information is still out there and I could seek it out, but I don’t.
On the other hand I do not miss being assaulted with pharmaceutical ads, scam products, junk food ads, whatever the latest McDonald’s toy is, my local car dealerships yelling at me, and so much other trash.
I’ve never figured out how someone could draw a line to allow the useful parts of advertising without the bad parts.
“You’re only allowed to show a picture of your product, say its name, and a five word description of what it’s for”.
It’s clear Apple went out of their way to make Asahi possible in a secure way. I believe people on the Asahi project have said as much.