When you read the fine print you find out that’s for a 2D simulation of a robot arm that isn’t moving. Real-world results from coworkers showed it to be about 10x slower than Drake in meaningful sims.
This is not exactly the claim. I’m fairly certain they do run “fair tests” where they test the same scenarios in different simulators. The problem, however, is that these tests are not indicative of anything useful. Why should one care that you get 430,000x real-time speed simulating a robot arm that isn’t colliding with or moving any objects, at low sim accuracy? That number is fairly meaningless, since if you use machine learning (specifically RL) to solve some actual task (manipulation, locomotion, etc.), the speed will drop significantly. Moreover, in the past most simulators have typically benchmarked against previous versions of themselves and on scenarios with more collisions / somewhat more realistic situations (e.g. picking up a cube).
They test on 3 environments, all with multiple collisions, spanning locomotion and manipulation, and get some nice speeds. These tests are far more realistic and grounded in reality than the test that produced the 430K number.
I believe this is how some robot vacuum companies perfected their algorithms 10-15 years ago; basically simulating inputs to the software from LIDAR/bump sensors and figuring out how to avoid getting the robot "stuck" in your living room.
I have a couple roombas from that era. If I sit and watch them, their path planning makes no sense. But if I just put them on a schedule to clean once a day, and don’t think about them beyond emptying their bin, I have continuously clean floors. Which, for me, is all I care about.
Not GP, but I use a Roborock S8 MaxV Ultra, which was the top of the line model earlier this year. I also just set it to run at night when I’m not paying attention to it. It’s… fine, I guess.
But if there is anything at all on your floor that will get stuck in its rollers, it will get stuck on it. Like 100% of the time. I’ve seen everything. Charge cables, towels, kids toys, any small pieces of fabric, anything you can think of. It has a camera and is supposed to avoid all these things, but it straight up never works. I have a nightly routine where I clear everything I can from the floors to make room for it, and it manages to find the one thing I didn’t see. And looking at its history it always ends up getting stuck in the first 5 minutes which means the whole clean is a bust.
I would wager my overall success rate (nights where it does its whole job and doesn’t get stuck) is maybe 70%. Just good enough that it’s “worth it” but it’s so frustrating that it can’t simply steer around this stuff, especially when it’s advertised as being able to.
I could rant about the other stuff I hate about it, but suffice to say I still feel that good cleaning robots need another 5-10 years before I could fully recommend them.
> It has a camera and is supposed to avoid all these things, but it straight up never works. I have a nightly routine
Try running it in daylight. Mine from Eufy is similar; it has a flashlight, but good ambient light is superior. Still, the cameras and image recognition are extremely flaky IMO (the AI parts), whereas the LiDAR for navigation is absolutely spectacular. Even if you move furniture around and drop it randomly in a different room, it always finds its current location in less than a minute.
I’m pretty sure mine doesn’t engage the rollers unless it’s on carpet, so that’s something.
Related: my second most hated aspect of it is that it doesn’t empty its dust bin mid-clean. Oh, it can empty its dust bin at the end, and it knows how to empty mid-clean, because it empties it whenever it washes its mop (which it does know to do mid-clean, and you can even configure how many minutes it should go before re-washing). But noooo, it has no idea that maybe its bin will get full and that it should empty it even without needing to wash the mop.
Because I have a German Shepherd and it can easily fill up its bin with dog hair after 10 minutes of carpet cleaning, and after that it’s just pushing clumps of dog hair around from one end of the room to another.
It’s so frustrating because the engineers did a great job of making the thing able to self-empty its bin in the first place, and thought enough to code for and allow configuration of mid-clean mop washing. But they didn’t connect the dots and consider that some people have large pets and may need the dust bin to get the same treatment as the mop.
iirc, they basically avoided fancy routing algos and just let the robot haphazardly wander the space (and determined that the room was clean after a set number of activations for each bumper sensor)
Full disclosure: I left the company that became iRobot well before the Roomba, so I have zero insider knowledge.
But if you're familiar with Rod Brooks' public work on the "subsumption architecture", the Roomba algorithms are pretty obvious.
Early gen Roombas have 3 obvious behaviors:
1. Bounce randomly off walls.
2. Follow a wall briefly using the "edge" brush.
3. When heavy dirt is detected, go back and forth a bit to deep clean.
Clean floors are an emergent result of simple behaviors. But it fails above a certain floor size in open plan houses.
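The priority scheme described above can be sketched as a tiny subsumption-style arbiter, where higher-priority behaviors suppress lower ones. This is a hypothetical illustration of the pattern, not iRobot's actual code; all names and sensor keys are made up:

```python
import random

def spot_clean(sensors):
    # Highest priority: heavy dirt detected -> scrub back and forth.
    if sensors.get("dirt"):
        return "scrub"
    return None

def wall_follow(sensors):
    # Medium priority: hug the wall briefly with the edge brush.
    if sensors.get("wall"):
        return "follow_wall"
    return None

def wander(sensors):
    # Lowest priority: bounce off obstacles at a random angle,
    # otherwise just drive forward.
    if sensors.get("bump"):
        return f"turn_{random.randint(90, 270)}_deg"
    return "forward"

# Priority order: earlier behaviors subsume (override) later ones.
BEHAVIORS = [spot_clean, wall_follow, wander]

def arbitrate(sensors):
    # First behavior that produces an action wins.
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"dirt": True, "bump": True}))  # scrub (dirt wins over bump)
print(arbitrate({}))                            # forward
```

No behavior knows about the others; the "cleaning pattern" falls out of which behavior happens to fire each tick.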
Later versions add an ultra-low-res visual sensor and appear to use some kind of "simultaneous localization and mapping" (SLAM) algorithm for very approximate mapping. This makes it work much better in large areas. But you used to be able to see the "maps" from each run and they were horribly bad—just good enough to build an incredibly rough floor plan. But if the Roomba gets sufficiently confused, it still has access to the old "emergent vacuuming" algorithm in some form or another.
The newest ones may be even smarter, and retain maps from one run to the next? But I've never watched them in action.
I really like the old "subsumption architecture" designs. You can get surprisingly rich emergent behavior out of four 1-bit sensors by linking different bit patterns to carefully chosen simple actions. There are a couple of very successful invertebrates which don't do much more.
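The "four 1-bit sensors" idea amounts to packing the sensor bits into a nibble and looking the pattern up in a small action table. Again a hypothetical sketch; the sensor names, patterns, and actions are invented for illustration:

```python
# Four 1-bit sensors packed into a 4-bit pattern (bump_left is the MSB).
SENSORS = ("bump_left", "bump_right", "cliff", "dirt")

def pack(readings):
    # Pack sensor booleans into a single 4-bit integer.
    bits = 0
    for name in SENSORS:
        bits = (bits << 1) | int(bool(readings.get(name)))
    return bits

# Sparse 16-entry table; unlisted patterns fall through to a default.
ACTIONS = {
    0b0000: "drive_forward",
    0b1000: "turn_right",   # left bump  -> veer right
    0b0100: "turn_left",    # right bump -> veer left
    0b1100: "reverse",      # both bumpers -> back up
    0b0001: "spot_clean",   # dirt -> deep-clean in place
}

def act(readings):
    pattern = pack(readings)
    if pattern & 0b0010:    # cliff bit always wins: never drive off stairs
        return "stop"
    return ACTIONS.get(pattern, "drive_forward")

print(act({"bump_left": True}))             # turn_right
print(act({"cliff": True, "dirt": True}))   # stop
```

Sixteen possible patterns, a handful of carefully chosen actions, and you already get behavior that looks deliberate from the outside.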
I can totally spend time watching them, rooting for them to randomly (well, one is the random pinball type, one has some kind of camera and makes nice straight lines) pick up a piece of dirt in the middle of the floor. It’s kind of the same feeling as watching a bunch of puppies play.
There are self driving car companies doing the same kind of training. I know because I spoke to an employee who works at a startup that provides this type of software as a service.
Kinda nitpicking, but why is "SIM" all caps here but not in the source article? It's a "sim" (short for "simulator"), not a SIM card for a cellular phone :'D