"Emergence of the generally agreed upon "next big thing" in AI beyond deep learning. ... What we are seeing is new understanding of capabilities missing from the current most popular parts of AI. They include "common sense" and "attention". "
Right. As I've said on here a few times over the last few years, the big unsolved problem ahead is common sense, defined as getting through the next 30 seconds of life without screwing up. There's been too little progress on this. We need this more than ever, because we now have machine learning systems based on optimization which do the right thing most of the time and something badly wrong some of the time. (I used to speculate on something that made a sheaf of predictions and looked for good and bad outcomes, as an approach to dealing with locomotion over rough terrain. You want to pick moves which have some good outcomes and no really bad fall-off-cliff outcomes. But I was in that too early, in the 1990s, when we didn't have enough compute power for wasteful approaches like that.)
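The "sheaf of predictions" idea described above can be sketched in a few lines: for each candidate move, roll out several noisy outcome predictions, veto any move with a catastrophic branch, and pick the best survivor. Everything here (move names, the predictor, the scoring) is an illustrative assumption, not the original 1990s system.

```python
# Sketch of outcome-sheaf action selection: sample many predicted outcomes per
# move, reject moves with any "fall off a cliff" outcome, keep the best average.
import random

CATASTROPHE = -100.0  # any predicted outcome at or below this is disqualifying

def predict_outcomes(move, n_samples=20):
    """Stand-in predictor: returns n noisy outcome scores for one move."""
    base = {"step": 5.0, "leap": 9.0, "edge_walk": 10.0}[move]
    risk = {"step": 1.0, "leap": 6.0, "edge_walk": 60.0}[move]
    # Higher risk means a heavier-tailed downside on the predicted outcome.
    return [base - random.expovariate(1.0 / risk) for _ in range(n_samples)]

def choose_move(moves):
    """Pick the move with the best average outcome among those with no disaster."""
    safe = {}
    for m in moves:
        outcomes = predict_outcomes(m)
        if min(outcomes) > CATASTROPHE:   # veto moves with any terrible branch
            safe[m] = sum(outcomes) / len(outcomes)
    if not safe:
        return None                       # no acceptable move: stop and replan
    return max(safe, key=safe.get)

print(choose_move(["step", "leap", "edge_walk"]))
```

Note the asymmetry: `edge_walk` has the best average outcome but is usually vetoed, because one sampled branch predicting a catastrophic fall is enough to exclude it. That veto-before-optimize structure is the point, and it is indeed wasteful of compute, as the comment says.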
"Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years"
Yes. Manipulation in unstructured environments is still very poor. Brooks' startup in that area, Rethink Robotics, failed on this problem. Mediocre bin picking robots were available in the 1980s, and slightly less mediocre bin picking robots are available today. Amazon has put considerable effort into this without getting good enough performance to deploy.
I had hopes for Willow Garage, which got far enough to fold clothing, but they didn't get beyond that point. It's embarrassing to compare the 1960s video from the Stanford AI lab of a robot assembling some automotive parts with the videos of the DARPA manipulation challenge from a few years ago.
"An AI system with an ongoing existence (no day is the repeat of another day as it currently is for all AI systems) at the level of a mouse."
Yes. I've been arguing for mouse-level AI, or sometimes squirrel-level AI, for a long time.
Small mammals have very good locomotion and manipulation, with brains 1000x smaller than human.[1] We're there on having enough compute power for that size of brain.
That's an indication that this problem requires a different approach, not just bigger models.
I had this argument with Brooks decades ago, when he was promoting Cog and trying for human-level AI in one jump. He said "I don't want to go down in history as the man who developed the world's greatest artificial mouse". Then he went on to develop the Roomba.
> Regarding this point: "Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years" : Yes. Manipulation in unstructured environments is still very poor. Brooks' startup in that area, Rethink Robotics, failed on this problem.
(I worked at Rethink Robotics)
Rethink Robotics was not working on the problem that Brooks is complaining about here. Today’s robots are still using parallel electric fingers or vacuum suction cups, which are much poorer than even the most primitive claws in the animal kingdom. That’s the problem.
Many robot hands have been built. These are rather nice.[1] But they're usually used for teleoperators, not computer-controlled. Usefully controlling them in unstructured situations is still hard.
I once heard a professor say that common sense is knowing when to make an exception. I wonder if this goes against the grain of machine learning systems, which seek to optimize for the common case rather than the exception.
Introspective definitions are not that useful. That's why I prefer "not screwing up badly" as the key property of common sense. It can be applied to observed behavior, animal or human. It's about having some way to predict the likely results of actions before doing them, and using that information to help choose actions.
An anecdote was shared about a hospital janitor who was admonished for not mopping a patient's room during the scheduled cleaning hours. It turned out the janitor was being considerate of the patient's family, who visited their child during those hours, and so did the cleaning during off-hours instead. The janitor was exercising common sense out of consideration and empathy for the family.
If the heuristic were simply not to screw up badly, we would all be selfish.
There's a small factual error here: Brooks says Artemis II will launch in May 2024, while Artemis III will land on the Moon in 2025.
According to NASA's Inspector General [1], Artemis II can launch no sooner than 27 months after Artemis I, which would push the mission into 2025. Similarly, the Inspector General believes the first Moon landing can come no sooner than 2026. [2]
Note also that this Moon landing depends on a completely untested technology (in-orbit fuel transfer); the landing craft will require 4-8 additional launches to top off its tanks. [3]
Personally, I would be shocked to see a lunar landing before 2027. There are just too many things that would have to go right on the first try to make a 2026 deadline.
One point Brooks touches on that I'm a bit surprised doesn't get more discussion is hands-off autonomous highway driving in some form, to some degree. Long-distance highway driving--a fair bit of which can be in fairly light traffic--would be a big win for a lot of people. No, it doesn't address the wishes of those who don't want to learn to drive or own a car, but it's still a big convenience (and presumably safety) win, and it seems a lot easier in general.
His point about point-to-point 1-hour transport (edit: transport was his word, but deliveries would be a prerequisite, no?) anywhere on Earth seems pretty dismissive. Also, much of the space section is about Mars, but there's a lot more going on in space than Mars.
JWST started its science mission around July 2022. Researchers get a year of study time before their data are released, so we can expect to see an increasing stream of JWST research releases until at least that point. I suspect this will keep reminding the world that space holds more than Mars.
There’s the Psyche asteroid mission, which I really wonder about long term.
There’s also this seemingly unexpected opening over the last several years to discuss and research UAP and interstellar objects. I think the count is at two objects that Avi Loeb and his group will look to recover soon.
Then there’s the research Rocket Lab appears poised to open up. They’re planning to launch a probe to Venus in March, for instance. Lowered launch costs could make this sort of thing doable by universities. While those projects won’t have NASA-level funding, they also won’t have NASA/congressional-level bureaucracy.
I could probably go on; it’s an OK list, if a little incomplete.
Edit: on cars, it appears the news nowadays is all about electrification, but his points are almost exclusively about self-driving. Where’s the discussion of battery tech? Charging infrastructure? Charging infrastructure is probably low-hanging fruit for someone trying to describe the future, since it’s a fairly straightforward engineering problem with a known scale.
> But his points are almost exclusively self driving. Where’s the discussion of battery tech?
In 2018, "everyone" was saying perfect self-driving was only a few years away (I recall a highly-rated HN comment saying humans would likely be banned from roads by 2025), and nobody was interested in battery tech.
> In 2018, "everyone" was saying perfect self-driving was only a few years away.
Yes. Having been involved in the DARPA Grand Challenge 20 years ago, I didn't think it would take this long. I thought we'd at least have automated rental car pick-up and return at airports by now. At least lots of automated shuttle buses on campuses and at airports.
Waymo is getting close.[1] It's worth looking at the accident reports and news stories with complaints about Waymo. People complain about Waymo cars clogging their street, which is happening because SF marked a few nearby streets as "slow streets", and Waymo's planner is optimizing routes around them. There's a recent news story where a Waymo car reached a construction trench in a street and stopped. Didn't go into the trench. Someone had to intervene remotely to back it out of the construction area.
There was a collision on Geary Boulevard where a Waymo vehicle detected a speeding car during a left turn and stopped, rather than accelerating to escape. That's their worst accident so far, and the other driver was at fault. There are no stories about Waymo cars making sudden lane changes and causing multi-car collisions.
[1] https://waymo.com/sf/
> There was a collision on Geary Boulevard where a Waymo vehicle detected a speeding car during a left turn and stopped, rather than accelerating to escape. That's their worst accident so far, and the other driver was at fault.
The most common single problem with serious autonomous vehicles right now is being rear-ended by human vehicles while cautiously entering intersections. The usual sequence of events is, autonomous vehicle approaches intersection where the view of the cross street is obstructed, autonomous vehicle starts to enter intersection, detects cross traffic, stops, and is rear-ended. There are dozens of those in the California DMV reports.
This will probably get better as more vehicles get automatic radar-controlled braking, which is good at preventing low-speed rear-ending.
Possible solution: when a self-driving car is entering a situation where it is likely to stop suddenly, and it is being followed too closely, flash the brake lights rapidly even while still moving forward.
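The proposed heuristic above is simple enough to sketch. The threshold values, the two-second following rule, and the function names are all made-up assumptions for illustration, not any vendor's actual logic.

```python
# Sketch: flash brake lights pre-emptively when a hard stop looks likely
# and the following vehicle is too close to react in time.

def following_too_closely(gap_m: float, own_speed_mps: float) -> bool:
    """Crude two-second rule: the follower's gap should cover ~2 s of travel."""
    return gap_m < 2.0 * own_speed_mps

def warn_follower(stop_probability: float, gap_m: float, own_speed_mps: float) -> bool:
    """Return True if the brake lights should be flashed pre-emptively."""
    likely_hard_stop = stop_probability > 0.5   # assumed planner-supplied estimate
    return likely_hard_stop and following_too_closely(gap_m, own_speed_mps)

# Entering an obstructed intersection at 10 m/s with a tailgater 8 m behind:
print(warn_follower(stop_probability=0.7, gap_m=8.0, own_speed_mps=10.0))  # True
```

This fits the rear-ending pattern described earlier: the warning fires exactly in the obstructed-intersection case, before the autonomous vehicle actually brakes.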
The problem should eventually solve itself once people realize tailgating is illegal and automated cars have them on camera. Just enforce the already existing laws against reckless, dangerous driving.
There are a lot of scenarios where it's low volume/you want a car at the other end/you're not going to a city center/etc. I don't necessarily disagree that it would be nice if there were better passenger train service in the US but most of the long distance highway driving I do wouldn't be remotely served by train.
In a research lab I used to work in, my colleagues publishing papers could explain to me the core idea in 2 minutes and probably an undergrad freshman could also understand it. However, if you then read the paper covering the same idea, it would take hours to decipher a fuzzier version of the idea and a freshman undergrad probably wouldn’t bother reading it.
> Point to point transport on Earth in an hour or so (using a BF rocket).
This is just a question of money.
All the necessary technologies already exist; they're just very expensive, which is why even the military avoids them.
For example, imagine we decided to use the first stage of a Falcon 9 for suborbital transport over ~12,000-14,000 km.
It would have a payload capacity of roughly 10 metric tonnes.
Cost per launch could be roughly half that of an orbital launch (because the second stage costs about as much as the first), so ~$33M, which works out to $3.3M per metric tonne, or $3,300 per kg.
I know an iPhone costs more than $3,300 per kg, but even planes cost less than $100 per kg, cars ~$10-20 per kg, and current commercial air freight runs ~$50 per kg from China to Europe (other continents not much more).
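The arithmetic in the estimate above can be checked directly. The $33M-per-launch and 10-tonne figures are the commenter's own assumptions, reproduced here, not official SpaceX numbers.

```python
# Back-of-envelope check of the suborbital-transport cost estimate above.
launch_cost_usd = 33_000_000     # assumed: ~half a full Falcon 9 launch price
payload_kg = 10 * 1000           # assumed: ~10 metric tonnes of payload

cost_per_kg = launch_cost_usd / payload_kg
print(cost_per_kg)               # 3300.0 USD/kg, vs ~$50/kg China-Europe air freight
```

So the estimate is internally consistent: rocket point-to-point would be roughly 60-70x the cost of air freight per kilogram, which is why it's framed as "just a question of money."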
Daimler reps have already given their answer to the trolley problem: "Car safety systems will give priority to saving the people inside the vehicle, and will save those outside only if doing so does not compromise the safety of those inside."
In most imminent accident scenarios the optimum solution is probably going to be to apply brakes with maximum force. Once you get into calculating odds for complex scenarios involving lane departures, etc., there's a good chance you'll just make things worse.
That and the fact that no one is buying a car from a manufacturer that is optimizing outcomes for people who don't own the car.
The one time in my life I've ended up doing a high speed swerve (onto the shoulder) to avoid a collision, the only reason I had to do so was some clown who wasn't paying attention ended up swerving to avoid the car they were about to rear-end. Somewhat miraculously for all the vehicles threatening to play pinball at 65 miles per hour, no one actually collided.
But I must say, this is a complex problem; it's not just about car safety systems.
- Standard current EU practice is to constantly analyze incident statistics, look for places with much worse statistics than average, and try to make changes that render those places safer.
For example, it's normal EU practice to rebuild the shape of a road intersection so there are no blind zones where you can't see vehicles approaching from other directions.
Another example: they try to make the road "transparent", so you can see far ahead when congestion appears and have more time to slow down.
This guy is so weirdly pessimistic about the future. I don’t understand what you get out of being right about what won’t happen. It doesn’t seem helpful.
Maybe it’s because he doesn’t stand to inherit the future others seek to build
How is he weirdly pessimistic? He's consistently right.
Personally, I would be happy if there were fewer people moving into overheated areas. Every person working on crypto, quantum computing, AI, self-driving cars, or virtual reality (among other things!) is a person who isn't doing something tangibly and immediately useful for society. And it's clear most of those people are in it for purely venal reasons: they want a big payday. It's not good to have so many bullshit artists in a society.
I've nothing against people working on long-term--and possibly speculative--problems. It's when they basically lie about the prospects for, as you say, a payday.
"This guy" has been inventing the future for a while [0]. Some of the earliest walking, self-balancing robots are ones whose development he led. He founded iRobot (amongst other companies), which has sold over 30 million home robots. [1] He's been part of the real-world implementation of AI for decades.
Maybe because he works in a relevant field for seeing certain technologies be consistently overhyped and overpromised? Also because he has a really good point about predicting the future of tech whose limitations we don't really understand. It becomes a way for us to make magical predictions. AGI can do anything if you don't understand what its limits might be. Cue Ray Kurzweil.
So we should all just hype things we don't actually think will happen (at least in a given timeframe)? Perhaps so we can get in on the grift as seems to have been the case in at least some cases with autonomous driving.
It's one thing to work towards ambitious goals. It's another to mislead people about what is realistically possible.
[1] https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n...