This is not correct. A business this big would definitely be using accrual accounting (not cash), which generally means you count the revenue when the actual ownership transfers to the buyer. Since the truck was operated by the seller, the transfer of ownership almost certainly happens when the buyer receives the goods.
Honestly my impression was the “nines” of reliability just means how many nines your reliability starts with, as a decimal. I never thought much about it though.
I will also say it’s amusing that the debate is between one and two nines. Neither is objectively great. If you built a system with more than 3.65 days of downtime in a year (i.e., below two nines), that wouldn’t be something you’d brag about in an interview.
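For concreteness, a quick sketch of the downtime budget (assuming a 365-day year; each extra nine cuts the budget by 10x):

    # Annual downtime budget for k nines of availability
    HOURS_PER_YEAR = 365 * 24

    for nines in range(1, 6):
        availability = 1 - 10 ** -nines          # 1 nine = 90%, 2 nines = 99%, ...
        downtime_days = HOURS_PER_YEAR * (1 - availability) / 24
        print(f"{nines} nines ({availability:.3%}): {downtime_days:.2f} days down/year")

Two nines works out to 3.65 days, which is where that figure comes from.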
I used a first-gen Eee PC with Linux in college. I didn't have any problems with speed for normal use, though I ssh'd into servers for anything more intensive than running a browser.
Thanks for replying - so it's used as a generic catch-all term internally? Did previous DoD secretaries use it in speeches? I thought they used bureaucratic terms like "service member." I guess that doesn't work in casual conversation...
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.
OK, maybe someone will build a bioweapon that does that for real. :P
There are plenty of physical and legal barriers to creating a bioweapon, and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and antivirals. The advancements in medicine will outpace bioweapons by a lot, because most people are afraid of bioweapons.
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
For every person who thinks about creating the HIV-like deadly pathogen, there will be millions more thinking about how to defend people against such a pathogen, how to detect it faster before symptoms arise, how to put up barriers to creating it, and possibly even how to modify our bodies to be naturally resilient to all similar pathogens. Just like what you're doing here. I don't think we should mark knowledge or intelligence itself as the problem. If that were true, we should be making everyone dumber.
We were woefully underprepared for COVID despite many people predicting that very event. At the very least, we should have had stockpiles of PPE from the beginning.
It's not enough for a handful of people to predict something. You have to get the entire nation on board to defend against it.
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine for nuclear weapons.
This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.
Centralizing power is dangerous and leads to power struggles and instability.
It is not easy to create weapons. Why do you think the physical and legal barriers that today prevent you from acquiring equipment and building nuclear weapons will go away when everyone becomes smarter?
I'm really quite sure that wasn't one of the conditions. I didn't remember that from 2012, and looking now, it wasn't included in the merger agreement. They did write:
> We believe these are different experiences that complement each other. But in order to do this well, we need to be mindful about keeping and building on Instagram’s strengths and features rather than just trying to integrate everything into Facebook.
> That’s why we’re committed to building and growing Instagram independently. Millions of people around the world love the Instagram app and the brand associated with it, and our goal is to help spread this app and brand to even more people. -- https://about.fb.com/news/2012/04/facebook-to-acquire-instag...
I can see the argument: if you’re familiar with poetry terms, then of course that naming makes sense. But I think proper names occupy a different part of the brain for people, which inhibits the ability to make that connection. But also, the jump from sonnet to opus is not as big as the one from haiku to sonnet, even though the names might imply such a jump (17 syllables -> 14 lines -> multi-page masterpiece does not capture the difference between the models).
> I can see the argument if you’re familiar with poetry terms,
I think they mean "if you're familiar with Anthropic's family of models". They've had the same opus > sonnet > haiku line of models for a couple of years now. It's assumed that people already know where sonnet 4.6 lands in the scheme of things. Because they've had that in 4.5, and 4.1 before it, and 4 before it, and 3.7 before it, etc.
That it's 70 remote assistance people for 3,000 cars is pretty good counter-evidence to the "they're not driverless, they're remote controlled" claims.
The article says 70 active on average at any given time, but it then lists total fleet size rather than the average number of active cars, so it's not a fair comparison.
Although it then says they drive about 4M miles per week, which works out to ~57,000 miles per active RA agent per week. A person driving ~25 mph on average 24/7 would cover ~4,200 miles in a week (and we can assume 24/7 here because they reported active agents, so imagine a team of ~3 people swapping out as driver in this hypothetical).
So that gives you a car/operator ratio of roughly 14, and probably higher, since I bet the average speed is less than 25 mph.
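Spelling out that back-of-envelope math (the 70 agents and 4M miles/week are from the article; 25 mph is my assumption):

    # Cars-per-operator estimate: fleet miles vs. a hypothetical 24/7 human driver
    fleet_miles_per_week = 4_000_000       # from the article
    active_agents = 70                     # average concurrently active RA agents
    avg_speed_mph = 25                     # assumption; city averages are likely lower

    miles_per_agent = fleet_miles_per_week / active_agents     # ~57,000
    miles_per_driver = avg_speed_mph * 24 * 7                  # ~4,200 if driving 24/7
    print(f"cars per operator: {miles_per_agent / miles_per_driver:.1f}")  # ~13.6

Any drop in the assumed average speed pushes the ratio higher.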
I think anyone who goes for a drive in Los Angeles can attest that there are way more than 70 cars active at any point. It's not unusual to see multiple Waymos at intersections.
Also, the average speed is way less than 25 mph, considering it may take 30 minutes to go 3-4 miles in city traffic.
Yeah that sentence struck me as very carefully worded. They also don't mention how often RA is needed or invoked. We'll encounter a lot of these autonomous systems (cars, robots, equipment) that escalate decisions and edge cases to human employees until they are trained enough that reliability goes up.
It's tricky to give a number for "RA required" that isn't wildly misleading, or contextualize one you're given. The common case for most AV RAs is confirmation of what the vehicle already has planned. Does that count as "required"?
An AV company can also tune how proactive vehicles are in reaching out to RA for confirmation, which is a balancing act between incident rate, stoppages, RA availability, and rider metrics. There are other ways to tune the RA rate, such as adjusting when and where the vehicles operate, which comes down to standard taxi fleet management tools (e.g. price and availability).
Waymo chooses a target that they're comfortable with and probably changes it every so often, but those numbers aren't the only possible targets and they're not necessarily well-correlated to the system's "true" capabilities (which are themselves difficult to understand).
The remote control claim never made sense anyway. "There is no computer driver, it's all fake, they're paying teams of drivers in India" only sounds plausible to anyone who's never encountered lag in a video game.
> Our vehicle-to-RA connection is also as fast as the blink of an eye. Median one-way latency is approximately 150 milliseconds for U.S. based operations centers and 250 milliseconds for RA based abroad.
That's still not fast enough for remote control, but are they implying they only send the RAs screenshots, since sending video would take seconds, not milliseconds?
> That's still not fast enough for remote control, but are they implying they only send the RAs screenshots, since sending video would take seconds, not milliseconds?
Their earlier blog post has screenshots (?) of the UI that the "fleet response" people have access to. It seems to be a video feed combined with yes/no questions, along with some top-down UI to direct where the vehicle should go.
Their claim is talking about latency, not bandwidth. What you're talking about is throughput, which can usually be solved by throwing more money at the problem.
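A rough illustration of the difference (all numbers are my assumptions except the 150ms latency from the post): once the link has enough throughput, the feed streams continuously, lagging only by the propagation delay plus per-frame serialization time.

    # Latency vs. throughput for a live video feed (illustrative numbers)
    one_way_latency_s = 0.150        # propagation delay, from the blog post
    bitrate_bps = 5_000_000          # assumed ~5 Mbps compressed 1080p stream
    fps = 30
    link_bps = 20_000_000            # assumed uplink capacity

    frame_bits = bitrate_bps / fps               # average bits per frame
    serialization_s = frame_bits / link_bps      # time to push one frame onto the link
    print(f"per-frame delay: {(one_way_latency_s + serialization_s) * 1000:.0f} ms")

So the feed lags by ~160ms, not seconds; given enough bandwidth, it's a latency problem, not a "video takes seconds" problem.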
I'm semi-seriously expecting remote-work Optimus driving to be a thing in the near future.
Well, more so than it already is.
A factory in the US* staffed entirely by humanoid robots has the same impact on US employment opportunities regardless of whether the robots are controlled by AI in the sense of software or in the case where the "Actually Indians" meme still applies.
It's just that in the latter case your "illegal aliens" who are "stealing our jobs" are managing to do so without actually crossing the border, making it very difficult to deport them, and denying them access suddenly becomes a freedom of speech issue.
* I'm in Europe, I don't think we'll be tolerating "new" "exciting" "opportunities" from Musk any time soon. I don't think China or Russia will be either. Or indeed more than half of the G20 nations. He'll be told to prove it, and get told "no" a lot because experiments based on his rhetoric and vision are no longer worth the downsides without solid proof both that it works as advertised and that he won't cut things off when he has a hissy fit.
You can stream video with milliseconds of latency, provided you have enough bandwidth for the video stream. Videoconferencing and cloud gaming both work on this principle.
That said, I would argue that their focus on one-way latency is misinformation meant to make the picture look rosier than it actually is. Round-trip latency is what matters here -- the video feed needs to get to the assistant, then the assistant needs to react, then their response needs to get back to the car. If one-way latency is 250ms, then round-trip latency would presumably be 500ms, which is a very long time in the context of driving. At highway speeds, you'd travel ~44 feet / 13 meters in that time.
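A quick sanity check on that figure (assuming 60 mph as "highway speed"; the human reaction time on top is a further assumption):

    # Distance covered during a remote round trip, before any human reaction
    speed_mph = 60
    speed_fps = speed_mph * 5280 / 3600    # 88 ft/s
    round_trip_s = 2 * 0.250               # two one-way legs at 250ms each

    print(f"{speed_fps * round_trip_s:.0f} ft during the round trip")  # ~44 ft
    # Add ~1 s of human reaction time and the car covers another ~88 ft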
They don’t do human in the loop at highway speeds.
Further, the cars need to safely stop in an emergency without human intervention. There’s no way for the car to first notice a problem, then send a message to a call center which then routes to a human, and then for that human to understand the situation, all fast enough to avoid a collision. Even 50ms is significant here, let alone several seconds.
That's not an achievement. Even a non-intelligent low- to mid-end compact SUV such as a 2024 Mazda CX-30 has adaptive cruise control that can detect stopped cars ahead, slow down, stop if necessary, and continue when the car in front starts moving.
I'm just saying that "it avoids a collision" by not ramming into people or cars is table stakes and it makes us look incompetent if we tout it as a flagship feature.
You say that, but we’ve had cars that can do what you describe for a decade, and yet actual autonomous driving is still waiting.
Not failing due to a software or hardware issue is way more complicated than just usually working.
Avoiding a collision is similarly far more difficult than just detecting a stopped car. What needs to happen when a car blows out a tire at speed isn’t just slamming on the brakes, for example. At scale, cars need to adapt to conditions and drive defensively, not just watch what’s directly in front of them.
> That said, I would argue that their focus on one-way latency is misinformation meant to make the picture look rosier than it actually is. Round-trip latency is what matters here -- the video feed needs to get to the assistant, then the assistant needs to react, then their response needs to get back to the car. If one-way latency is 250ms, then round-trip latency would presumably be 500ms, which is a very long time in the context of driving. At highway speeds, you'd travel ~44 feet / 13 meters in that time.
Right, which is why the blog post is titled "Advice, not control ..." and goes on to explain that they're not relying on the "remote assistance" people to make split-second judgments.
It reminds me of the claims that your phone's microphone is always on and feeding your conversations to Facebook so they can serve you ads, even when the app is closed.
Anyone who has experience with apps, permissions, or even basic reverse engineering or network activity monitoring would realize that this couldn't be true without someone having found evidence.
Yet even on HN you find die-hard believers that it's true. I think these stories tickle the conspiracy theorist part of some people's brains and they want to believe it's true. If it's true, it means they were smart enough to see through the facade unlike the other sheep in the world.
> I think these stories tickle the conspiracy theorist part of some people's brains and they want to believe it's true.
It's more than just that, there's also the Baader–Meinhof phenomenon, i.e. we talk about something and then notice it mysteriously suggested to us on YouTube or whatever. And then assume this was causal rather than coincidence.
I got that yesterday, though I can't even remember now what the video or the topic of discussion was.
Interestingly, the round-trip latency from the West Coast to continental Asia isn't nearly as long as I'd assumed (60ms to 250ms, depending on who's measuring).
Not nearly fast enough for real-time highway remote operation IMHO, but surprisingly fast. That's what I get for underestimating how fast light and electric fields can go.
I've got people in my social network who firmly believe that every car is, in fact, "driven by Indonesians". Apparently a widespread belief.
I've pointed out that these vehicles are quickly becoming more prevalent, here and (especially) in China. To which the counter is that there are plenty of Indonesians to go around.
But presumably most of the 3,000 cars are on the road at any given time? In which case the point stands, namely, that their remote operations people can't be the ones driving the cars because there aren't enough of them on duty at any given time; therefore, the cars really do drive themselves. (Which I would have thought was never in doubt, but I suppose some people are really determined to be skeptics.)
You can probably infer the average number of active cars from trips and utilization metrics, which are out there (at least for California I believe they report this).
E.g. 450,000 trips/week * 15 min/trip / 0.56 loaded-miles fraction / (24*7*60 min/week) ~= 1200.
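Spelled out (trip count and duration are from the reported metrics as I recall them; 0.56 is the assumed loaded fraction):

    # Average concurrently active cars, estimated from weekly trip stats
    trips_per_week = 450_000
    minutes_per_trip = 15
    loaded_fraction = 0.56                 # share of driving time spent carrying riders
    minutes_per_week = 24 * 7 * 60         # 10,080

    loaded_minutes = trips_per_week * minutes_per_trip
    total_driving_minutes = loaded_minutes / loaded_fraction
    print(f"~{total_driving_minutes / minutes_per_week:.0f} cars active on average")  # ~1196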
They’re not “no human in the loop” driverless. They’re just on autopilot, same as any airliner. We don’t call planes that take off and land themselves “pilotless”, because there are humans in the loop. Waymo must be rather defensive about being called out for merely having autopilot cars, which is weird because that’s rather miraculous in historical terms -- but certainly the generic term “autopilot” is a much less distinctive claim to success than “driverless”.
They are actually "no human in the loop" driverless most of the time.
If an airplane did not have a human inside the airplane and they only "dialed in" for extraordinary events, then yes I do think we'd call them pilotless.
Anyway Waymo, to my knowledge, doesn't use the terms "driverless" nor "autopilot." They claim that they are creating an artificial driver or that their cars are autonomous. There's something driving the car, it's just not a human driver, ergo it's not "driverless."
Autopilot in planes is much closer to cruise control than it is to a Waymo. This is of course the purported rationale behind Tesla's use of the name for their L2 feature. Both require a human operator available and monitoring at all times.
The aeronautic equivalent of Waymo is a fully autonomous UAV. A human might be needed to set high-level goals, but all of the actual flying/driving is done by the machine.
Pilots in a plane on autopilot are never out of the control authority of the plane (by which I mean: "ready to take over at a moment's notice"). Driverless AVs do drive without perpetual eyes-on oversight. The FAA would never allow that for commercial planes.
Autopilot in planes does not handle takeoff. Pilots still do that. Traditional autopilot was mostly just to keep the plane flying straight. Capabilities have improved over time, but it still doesn't fly the plane the way Waymo drives itself.
Notably, I believe the Cessna stall parachute in certain planes is autonomous by most definitions, not least because it fires without pilot input and because it takes control of the plane away from the human to do so. I recognize this is a rather, uh, odd use of ‘autonomous’ -- but it does technically serve as a pilotless emergency landing mechanism :)
That's rewriting history. What they said at the time:
> Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas. -- https://openai.com/index/better-language-models/
Then over the next few months they released increasingly large models, with the full model public in November 2019 (https://openai.com/index/gpt-2-1-5b-release/), well before ChatGPT.
> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT‑2 along with sampling code.
"Too dangerous to release" is accurate. There's no rewriting of history.
> Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.
I wouldn't call it rewriting history to say they initially considered GPT-2 too dangerous to be released. If they'd applied this approach to subsequent models rather than making them available via ChatGPT and an API, it's conceivable that LLMs would be 3-5 years behind where they currently are in the development cycle.