Extreme parkour with legged robots (extreme-parkour.github.io)
230 points by modeless 7 months ago | 190 comments



"A single neural net policy operating directly from a camera image, trained in simulation with large scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end."

So that's what it takes. That's so much simpler than the way Boston Dynamics does it, working out all the dynamics in simulation first. It's amazing to see this done from vision to actuators in one net. It's now much less of a mystery how animals managed to evolve the ability to do that.

(I had a go at this problem years ago. I was trying to work out the dynamics from first principles, knowing it would be somewhat off. Then use some self-tuning (a predecessor to machine learning) to fine-tune the thing. Got as far as running up and downhill in 2D in the early 1990s. About one hour of compute for one second of motion.)


Of course, the details of how to actually implement something like this are way more complex than "just throw everything into a big neural net with images as the inputs and actuators as the output". You need to provide the right kind of guidance in order to learn a usable policy in any reasonable amount of time.

A very recent development (which this work builds on) is the idea of "online adaptation". It essentially involves doing the training in two stages:

1. You add a variety of dynamically-varying environmental effects to your simulator, by randomly altering parameters such as ground friction, payload weight distribution, motor effectiveness, and so on. You give the motion controller perfect knowledge of these parameters at all times, and let it learn how to move in response to them.

2. Then, you remove the oracle that tells the controller about the current environmental parameters, and replace it with another neural network that is trained to estimate (a latent representation of) those parameters, based on a very short window of data about the robot's own motor commands and the actual motion that resulted from it.

All of this can be done in simulation, many times faster than real-time. But when you transfer the system to a real robot, it adapts to its environment using the estimated parameters, without any of the networks needing to be re-trained. This ends up making it pretty robust to difficult terrain and perturbations. It also has the benefit of papering over subtle differences that arise between the simulated and real-world dynamics.

This paper adds a lot of additional refinements to the same basic idea. In the first stage, the system is given perfect knowledge of its surrounding terrain and the locations of some preselected waypoints, and learns to follow them. The second stage replaces those inputs with estimates derived from an RGB+depth camera.
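
Schematically, the two stages might look something like this (a minimal PyTorch sketch, not the actual code from this paper or its predecessors; the dimensions, module shapes, and names are all invented for illustration):

  import torch
  import torch.nn as nn

  N_PRIV = 8       # privileged env params: friction, payload, motor strength, ...
  N_PROPRIO = 48   # joint angles/velocities, last actions, base orientation, ...
  N_ACT = 12       # one target per joint
  HIST = 20        # short history window used by the stage-2 estimator
  LATENT = 8

  class Policy(nn.Module):
      # Motion controller: proprioception + environment latent -> joint targets.
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(
              nn.Linear(N_PROPRIO + LATENT, 256), nn.ELU(),
              nn.Linear(256, 128), nn.ELU(),
              nn.Linear(128, N_ACT))

      def forward(self, proprio, latent):
          return self.net(torch.cat([proprio, latent], dim=-1))

  # Stage 1: an "oracle" encoder sees the true simulator parameters,
  # and the policy is trained with RL on top of its latent.
  priv_encoder = nn.Sequential(
      nn.Linear(N_PRIV, 64), nn.ELU(), nn.Linear(64, LATENT))

  # Stage 2: an adaptation module must estimate the same latent from a short
  # window of the robot's own commands and the motion that resulted,
  # since there is no oracle on real hardware.
  adapt_module = nn.Sequential(
      nn.Linear(HIST * N_PROPRIO, 256), nn.ELU(), nn.Linear(256, LATENT))

  def stage2_loss(true_priv, proprio_history):
      # Supervised regression onto the (frozen) oracle latent.
      with torch.no_grad():
          target = priv_encoder(true_priv)
      pred = adapt_module(proprio_history.flatten(start_dim=1))
      return nn.functional.mse_loss(pred, target)

At deployment only the policy and the adaptation module run on the robot; the oracle encoder never leaves the simulator.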


I guess this is why evolution is taking unreasonable amounts of time.


From your perspective. In reality, "amount of time" is quite abstract. A million years can still be "a blink" for an entity with a different perception of it.


Assuming time has no end, what would be a reasonable amount of time? Let's say you could evolve at 10000x the speed? So what?

Where do you need to be?


> Where do you need to be?

Where is debatable, but you definitely need to be there by end of Q3 or we miss P&E targets.


Well at the very least you need to evolve at a rate which can keep up with environmental changes.


Out of the solar system by the time our Sun dies


7/8 billion years...


Earth's ability to support lifeforms such as human beings is expected to end in the next 500 million to 2 billion years. Granted, that's several orders of magnitude longer than we've been around, but still - the sun will still be shining when humans go extinct, assuming we don't find another home planet and manage to get ourselves to it. It will be billions of years afterward that the sun will actually die.


> "A single neural net policy operating directly from a camera image, trained in simulation with large scale RL, [...]

> So that's what it takes. That's so much simpler than the way Boston Dynamics does it, working out all the dynamics in simulation first.

I mean, it sounds like they have had to work out all the dynamics, and simulate them.

Admittedly they don't have to do it on-robot in real time any more.


Doesn’t the use of RL here imply some level of simulation?


Yes, but the key is that the RL simulation can be simple and unrealistic, since it's used to train offline, not to plan and control live. A control-systems approach like Boston Dynamics' requires excellent physics modeling, because that's the only way it can do planning or adjust for errors while controlling the actual robot in real time. The NN, by contrast, is trained to cover many possible physics during development, and at runtime on an actual robot it just shrugs and adapts to whatever crazy scenario it finds itself in.


That second paragraph: holy mother of exaggerated claims!


It's pretty incredible how animalistic its behaviors are becoming (hesitating for a moment at the edge of a lip, coiling its back legs for increased actuation, and when it almost misses the high jump, moving its back leg really fast multiple times to give itself a tiny boost each time until it recovers). Is that because it's trained on animal behaviors? Or is this emergent behavior and animals just also do it more or less optimally already?


It trains from scratch (no imitation-learning or hand-written policies to bootstrap off of), so it's all emergent: https://extreme-parkour.github.io/resources/parkour.pdf#page...


It suggests that animals learn using similar neural nets.


Or it simply suggests that this is advantageous behaviour and that different learning methodologies would result in the same behaviour.


I'm sure this is impressive for an autonomous robot. However, as a fan of real parkour it's kinda annoying to see some modest jumps and walking on a slope labelled "extreme parkour". What the robot demonstrates I'd expect any healthy 10-year-old to be able to equal.


The metrics they're using (2x its height for climbing a wall, 2x its length for crossing a gap) are weird and don't really relate to the same achievements for a traceur. 2x its height is really more like slightly over 1x its usable body for that maneuver (0.4 m length, 0.51 m height of the climb). I agree, not extreme but still pretty impressive for a robot. We're not going to see them doing cat leaps any time soon ;)


Relative size of leaps doesn't seem to be a particularly useful metric for assessing the NN performance anyway, since in the absence of the human constraint of fear it's really limited only by the mechanics of its legs and its weight.

More impressive would be adaptation to obstacles without clearly delineated edges, sticking landings on uneven/moving landing sites, and especially avoidance of landing sites which appear incapable of supporting its weight properly, particularly if it could do it well enough to generalise to novel courses.

The video hints the model may be able to do this to some extent (the high jump does show some apparently necessary compensating movement to avoid slipping off), but doesn't really demonstrate it.


I'll extend 'avoidance of landing sites which appear incapable of supporting its weight properly' to 'avoidance of landing sites which are off-limits, including living things'

E.g. A sleeping dog that is motionless, a valuable item, wet concrete.

I suppose I'm going out-of-scope, though.


I'd watch the video of one of those stepping on a sleeping dog tbf...


Totally agree with this! However, it's common for research papers to use exaggerated language to emphasize their breakthroughs and accomplishments. :)


Given the title, the video was indeed a bit disappointing but I imagine that “extreme” is ironic or at least relative.


I imagined a reference to The Office


It's actually hard for robots to compete with 10-year-olds.


For sure, but the paper is disingenuous in its description of the challenge it is supposedly tackling compared to what it actually achieves. Here's an excerpt from the introduction of the paper:

"Parkour is a popular athletic sport that involves humans traversing obstacles in a highly dynamic manner like running on walls and ramps, long coordinated jumps, and high jumps across obstacles. This involves remarkable eye-muscle coordination since missing a step can be fatal. Further, because of the large torques exerted, human muscles tend to operate at the limits of their ability and limbs must be positioned in such a way as to maximize mechanical advantage. Hence, margins for error are razor thin, and to execute a successful maneuver, the athlete needs to make all the right moves. Understandably, this is a much more challenging task than walking or running and requires years of practice to master. Replicating this ability in robotics poses a massive software as well as hardware challenge as the robot would need to operate at the limits of hardware for extreme parkour."

Here's an example of real parkour: https://www.youtube.com/watch?v=5lp1oS0vXg0

They aren't doing parkour, let alone "extreme" parkour.


Two years ago, the Boston Dynamics Atlas could do mid-air somersaults - that's better than most 10-year-olds. https://www.youtube.com/watch?v=tF4DML7FIWk


But worse at running than any able-bodied 10-year-old. Comparatively, a somersault is quite an easy, well-controlled movement.


But how amazing is it that this is even a thing to consider today


Well, I was very conservative in choosing a healthy 10-year-old as a point of comparison. A trained 10-year-old can do a lot more. This guy was 7 at the time of filming: https://www.youtube.com/watch?v=1c__4ETI7aM

I'm not trying to diminish the accomplishment with respect to the state of the art of robotics, just bring some reality to the comparison against what humans are capable of.


Whilst a lovely achievement, this title definitely overstates current robot abilities. Here is some human parkour for contrast:

https://m.youtube.com/watch?v=QHqAVaQqQWQ&t=106s


The difference being each human has to spend thousands of hours learning that level of control over their bodies, and work hard to maintain that level of physical fitness. And if they fall off the roof, millions of dollars of potential earnings and GDP die with them. Whereas these robots are only $70k, and once one of them can do this, they all can do it. Just like with Chess and Go. It’s not impressive at first, then a couple years later, it’s better than humans could ever be, and it can be cheaply replicated ad infinitum.


Yes, that's very true. Success for one robot means success for a whole bunch of robots. However, success for one Olympic athlete does not mean everyone can achieve the same level. That's the main difference.


I think the main difference is these robots aren’t individuals like us. They don’t and won’t have unique experience.

They are essentially one “organism”. They are effectively the same height, same weight, same battery life, same way of processing information, same capabilities.

There is zero uniqueness. Like a single machine with a thousand eyes and arms.

If they develop uniqueness it would get wild, like the movie Rogue One…

If a robot is damaged it would be interesting though. Would it adapt? If it adapts, would it be part of the same fleet? Could it receive the same "updates", or would it be an individual? It would not be able to do certain things other bots of the same model could do…


The most impressive thing about the video is not the skill but the control of their fear.


Not that I have experience, but I feel like that's just practice. Your subconscious needs to learn that it's not necessarily going to die when it does that, and your subconscious learns from experience, so if you did it 10 times and nothing went wrong, you'll be less scared.


I experienced this with skiing, where [1] you need to look down the hill and lean forward amongst other things for best results, things your fear reflexes want to avoid at all costs at first.

But with the parkour on top of city buildings that is more extreme than what they portray in rooftop chases in a Bourne Identity movie... hmm... I think that is another level. There is a good chance you will die if you do that a lot over 10 years. It probably requires some letting go of ego to the point where you are happy to die early. Like the free solo climbers.

[1] Forgive me if you are a great skier, this is my dumb interpretation!


Skiing is a unique sport in this regard, different to snowboarding in the way one needs to stand.

With skiing, you have to make yourself quite vulnerable. It's instinctual for us to want to close ourselves up to protect our organs when scared. But you can't ski well on difficult terrain without opening your body up, relaxing your arms, and exposing your vulnerability. This is especially scary at first when tree skiing. When you do relax it's amazing.

I think this is why some like snowboarding more. You're facing sideways and you feel less exposed. You can get away with being closed up a little more, and while it's bad form, you never really have to snap out of it the way you do on skis.


"just" practice


I think without fear, challenges to overcome, and ways to improve, there would be little point to living. Experience would be so bland?

The feeling of fear when you want to ask someone out, then you do it and it works out: that is the spice of experience. Likewise when it doesn't work out; it can suck, but it's also part of the spice.


$70K?

The one pictured appears to be $14k

https://m.unitree.com/en/a1/


I think in humans it will be possible Matrix style as well


Only if we can crack transferable skill-memories. I think the current thinking is that each human has different neural net patterns, though mostly in the same general regions.


^^^This


I got the opposite conclusion from watching this video.

I feel the way the robots do it is similar to humans. Considering that's all emergent behaviour (not trained with animals' or humans' motion data), I believe that robots will exceed humans by far in a few years.


I wouldn't say I'm particularly afraid of heights, yet my heart pounded while watching this.


I think "extreme" refers to current robots' abilities rather than comparing them to humans (which is apples vs oranges, as humans are biological robots).


The robot they're using is a Unitree A1: https://www.unitree.com/en/a1


What's the approx. price tag for one of those? (I'm not going to buy one, just curious)



Love ordering stuff from a website with no company information, no address, empty Support page. Totally not a scam.


Really useful information! Thanks!


Out of curiosity, is everyone of university age good at clickbait now?

Like the whole point of saying “extreme” parkour is to boost engagement from pedantic analytic people like us talking about the hyperbolic title choice


The mainstream, state broadcaster in my country publishes youtube videos of news topics with SENSATIONAL TITLES and thumbnails with close-up REACTION FACES. On their home page, along with the written articles.

I hate what the attention economy has become. Old man yells at emoji...


It is a bit disappointing. This video did not show anything more "extreme" than various Boston Dynamics videos from years ago. And to be even more pedantic, this is hardly parkour at all. Jumping over a gap and climbing on a box bears little resemblance to what we've come to expect when we hear this term.


I scanned the paper and, if I got it right, during training the robot gets "external" information about its world position and speed compared to pre-set waypoints, something that an animal or person wouldn't have access to. What I didn't understand for sure is if this information is also available to the robot while performing the "parkour": I'm pretty sure that it perceives the obstacles only through its depth camera, but how does it determine where it is and where it should go next? Is this still done through waypoints and global position knowledge? A joystick is mentioned for control; is that being used to set waypoints? Or feed the robot a relative direction and speed?


When the policy is deployed in the real world, only the depth camera is used, with no waypoints. Scandots and a target heading are used in the first phase of training to pretrain a policy in simulation. In Phase 2, a policy is trained end-to-end using the pretrained actor network: "First, exteroceptive information is only available in the form of depth images from a front-facing camera instead of scandots. Second, there is no expert to specify waypoints and target directions, these must be inferred from the visible terrain geometry." For policy training in Phase 2, DAgger (which is based on behavior cloning) is used, with the policy from Phase 1 as the expert; they also use some tricks to make sure no actions that are too different from the expert actions are executed during training. In Phase 2 the network learns to extract environment information from the depth camera instead of from the scandots. The pretrained actor network from Phase 1 is reused, but the depth embedding must be learned from scratch. This is how I understand it.
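
Schematically, that Phase-2 distillation loop is something like the sketch below (my rough reading only, not the paper's code; `env`, `expert_policy`, and `student_policy` are hypothetical names):

  import torch
  import torch.nn.functional as F

  def distill(env, expert_policy, student_policy, optimizer, steps=100_000):
      # `env` is assumed to return a dict with "scandots", "depth", "proprio".
      obs = env.reset()
      for _ in range(steps):
          # Expert (the Phase-1 policy) still gets scandots + target heading,
          # which exist only inside the simulator.
          with torch.no_grad():
              expert_action = expert_policy(obs["scandots"], obs["proprio"])
          # Student gets only the depth image + proprioception.
          student_action = student_policy(obs["depth"], obs["proprio"])
          loss = F.mse_loss(student_action, expert_action)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
          # DAgger: roll the simulator forward with the *student's* action, so it
          # visits (and learns to recover from) its own mistakes rather than only
          # the expert's states. The paper's guard against executing actions too
          # far from the expert's is omitted here.
          obs, done = env.step(student_action.detach())
          if done:
              obs = env.reset()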


Thank you for your comment. What I don't understand is this: when the robot is in a new environment, how does it know where it's supposed to go? My understanding is that the training teaches the robot how to get to a position, but I didn't see anything about how to choose where to go (in "old AI" parlance, what could have been defined as planning).


Those movements are extremely animal-like, to the point of being unsettling (despite that I want a robot like that for myself)


I am always wondering why they build such complicated biology-inspired robots when they could build something with wheels and simple levers that allows the robot to jump over obstacles and turn itself back over if it lands on its back.


My limited understanding is that wheels really don’t work for all terrain scenarios. For example, craggy mountains in Afghanistan. (Much of this research is funded directly and indirectly by the US DoD)


It's my assumption that biology-inspired robots would adapt better to a biological world. It might also be easier to solve problems by copying those who have already figured out a solution.


Nature can’t really make a nice wheel and free wheeling hub though. Blood vessels would get ripped off, cartilage etc.


Nature does its own version of a "wheel", mainly in flagella: https://en.m.wikipedia.org/wiki/Rotating_locomotion_in_livin...


That’s very true but it doesn’t seem to scale up well from that. :)


My understanding is that such robots do exist and have a different application. For example, balancing on longer hind legs by lifting the rest of the body up, then making a jump over a gap is likely easier than when using wheels.

In terms of landing on the back: if I remember correctly, one of these prototypes--from Boston Dynamics?--is able to rotate its legs 180 degrees to push off the ground when landing on the back.


Somebody else is already doing simple wheels and levers. A lot of people/groups, actually. These and others are doing legs. I don't know if anyone is looking at crawling and jumping, or screws or tracks. Or tentacles. I wouldn't be surprised.

What would be really interesting is if some way of motion turned out to be much better than what we find in nature or manufactured for non-autonomous machines.


Well, for one obvious reason, that's not terribly interesting from a scientific perspective.

For the other, legs are more versatile than wheels, especially when it comes to uneven terrain.


The best solution would be to add wheels as feet which gives you the best of both worlds. It does however add even more cost to an already outrageously priced robot.

https://spectrum.ieee.org/wheels-are-better-than-feet-for-le...


The robot that they used is $14k, compared to Boston Dynamics' $75k, and they also have another one available for $5k which you might be able to do the same thing with.

"Outrageously expensive" is not correct if you ask me.


There are robots / RC cars like that, but they wouldn't work well on rocky surfaces for example; legs are more versatile.



Gives us something tangible to compare.


This doesn't look like an extreme environment at all, yet the robot still has its legs sliding quite a bit on various obstacles, which surprised me since it can do complex stuff. They only show slow-motion on perfect moves, which is a bit disingenuous. I thought SOTA was better than this, honestly.


I thought the opposite. Watching it scrabble to get its hind legs up on the ledge - exactly as a dog would do, with the benefit of millions of years of evolution behind it - was very impressive to me.


Is it just me, or is it unimpressive that it can't long-jump further than 2x its body length?


Here's a Boston Dynamics one that is less than a foot long, 11 pounds, and can jump 30 feet (easily from ground to roof of one story building)

https://www.youtube.com/watch?v=6b4ZZQkcNEo


The robot needs a speaker so it can yell "Parkour!" during each jump


I'm curious why the leg joints hinge towards the posterior, instead of away from the posterior like they do in dogs. It seems we would want to mimic nature's model as optimal; however, does this imply the robot model performs better, in a way that somehow no quadrupeds evolved to?

Or is it some nuance of metals vs carbon-based joints?


Dogs have more joints, one for the knee where the femur meets the tibia and one for the ankle where the tibia meets the tarsal bones. If you just have one single joint in the hind legs you probably want to be able to push off as easily as possible, so the lower one makes more sense.


This all seemed a lot cooler before that Black Mirror episode, now every time I see a cool video like this, I get scared.


Why doesn't it use existing momentum to get over that gap or up on the ledge? It slows down before it makes the jump


Looks like it slows down just a tiny bit (needs time to compute?) before making some of the jumps in this video. So not completely losing momentum. Wonder if this will look more fluid once it's able to reach a higher speed.


War of the Worlds (recent remake) is finally here!


That was my first thought on seeing this - it's eerily similar to the form factor of the Bobs.


What about the handstand? Was it programmed to do so or was it an artifact of the neural net? I wonder if this relates to the fact that some dogs do in fact perform "handstand" (pawstand?) in seemingly random situations...


Where did they buy this robot dog from? It looks like the Boston Dynamics one.


Unitree. I think it's equivalent to the Go1 which is ~$5k shipped (although I don't think that version gives you the SDK access needed to do these kinds of tricks). It's much smaller than Boston Dynamics' Spot.


It's the predecessor of Go1. If you buy the Pro/Edu version of Go1 then you get direct low level SDK control, though there are reverse engineering efforts that can do the same on the cheap ~5k version as well


Did anyone else find the stair handstand to look like cheating? Extremely neat to see this stuff.


One part I didn't fully get is whether, after the training, the machine can calculate and implement the same maneuvers while "offline" (not connected to any other device to assist with the navigation), and what the computing hardware requirements would be for smooth navigation without external help or updates (for example, guiding it with a joystick or GPS waypoint on rugged terrain and having it automatically adjust to each unforeseen obstacle in the way).


There ought to be plenty of money in this for delivering explosive payloads...

(Edit: although come to think of it, in the animal world, jump height is much more constrained by 9.81 m/s^2 than by body size)


Physics constrains jump height to be about a few meters regardless of size. The basic problem is that the total energy required scales with mass, but the total energy available also scales with mass. The two end up cancelling out and you get a maximum jump height that is independent of size.

What really matters is the energy density of your storage medium, which is the same to an order of magnitude for any system based on chemical energy. So both a flea and an elephant (along with battery-powered robots) can jump about a meter or two in height.
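
As a sanity check (the energy figure below is just an assumed round number for illustration, not a measurement):

  # m * g * h = e * m  =>  h = e / g, independent of the jumper's mass m
  g = 9.81   # m/s^2
  e = 15.0   # J of usable work per kg of body mass per jump (assumed figure)
  print(e / g)   # ~1.5 m, whether the jumper weighs milligrams or tonnes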


With special designs you can build up energy over time and then release it all at once, to reach sporadic super jumps. Example:

https://www.youtube.com/watch?v=mvHXwTa5-DA

(But for all but extreme designs I fully agree with you)


Artillery exists.

Delivering explosive payloads is a solved problem. Detecting humans is a solved problem. The hard part is reliably distinguishing enemy solders from friendly soldiers or civilians.


Artillery is surprisingly inefficient.

It takes about 15 rounds of 155mm artillery per battlefield casualty. Each shot costs about $5k.

Robots like this might cost the same as a single artillery shot but have the ability to get multiple kills each.

It’s a cold arithmetic, but I guarantee you someone at DARPA is thinking and planning the future of warfare in these terms.


I thought you must be wrong - $5K for a dumb arty round?

But some googling indicates this is about right, and prices have even gone up since the Russian invasion.


I don't have real numbers for anything, but I tend to do analysis like the above quite often when evaluating business models. It's usually within a factor of 2x to 10x of being correct, so please don't take it as anything more than that:

A typical round weighs around 40 kilos or 100lbs. For reference, from a quick web search, as commodities:

- 100lbs of TNT would be around $500

- 100lbs of stainless steel is around $250

I'm not sure what materials go in (I'd assume better explosives and worse steel), but I think $250-$500 is a good estimate for price of raw commodities (not including anything beyond that).

I would at least double that for having those shaped into an artillery shell, and add the cost of non-commodities (like a detonator), and you're easily at $1000. I would at least double that for the raw cost of distribution, sales, shipping, and logistics, and other overhead. We're probably at $2000.

At that point, toss in 30% profit margin for everyone along the way (30% to manufacturer, 30% to distributor, etc.). You're probably now around $3500.

I think that's a bare minimum baseline for what it would take to get a very, very dumb shell. This goes up if you want:

- Anything at all fancy or high-tech

- War profits

- NREs covered (building a factory, which might be idle 95% of the time and spin up for wartime)

- Military inefficiencies

- Regulatory compliance, quality inspection, etc.

Etc.

You can hit $5k very easily.
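
The same back-of-envelope, spelled out (every input is one of the guesses above, not real procurement data):

  commodities = 500                      # ~100 lbs of explosive + steel, upper estimate
  shaped = commodities * 2               # forming the shell, detonator, etc.  -> ~$1,000
  delivered = shaped * 2                 # distribution, logistics, overhead   -> ~$2,000
  with_margins = delivered * 1.3 * 1.3   # ~30% margin at two tiers            -> ~$3,380
  print(round(with_margins))             # still under $5k before anything fancy or NREs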


That's a great analysis. I didn't think about the sheer quantities of materials needed.


As does counter-artillery. Fighting against a technologically-savvy opponent, you get off one shot and you're dead.


It's possible to know where the humans are but be unable to destroy them with commonly available artillery payloads due to them being well dug in with overhead protection.

These could swarm in the back door.


I don't know, loitering munitions are pretty much bringing a renaissance to the field of blowing people up.


Drones are a solution to the targeting problem: the operator watches through a live video link, identifies the target, then deploys weapons. (Drops grenade/launches Hellfire/navigates into the enemy)

This is why, despite being technically feasible, we haven't seen drone swarms yet. You can easily build 1000 drones, but you can't have 1000 channels of live video over radio back to 1000 drone operators.


Nice. Now send them to Ukraine so they can clear out Russian trenches.


As a reminder: CMU and its robotics program is intimately associated with the US Department of Defense, and although there are certainly civilian applications for this technology, one of the most likely short-term applications is improving legged drones for battlefield deployment (often, it should be said, in humanitarian roles, including search and rescue, or support roles, such as ammo, food, and water resupply, but deployment of armed legged drones is also an active area of research).

Here's a military news article covering the same umbrella grant program this research was funded under: https://www.c4isrnet.com/2022/06/23/darpa-adding-common-sens...


> CMU and its robotics program is intimately associated with the US Department of Defense

Is this not the case with all the major R1 institutions?

Walt Rostow may no longer be deciding on troop levels from his office in the MIT Economics department (which caused MIT to "divest" its overtly military work into fig-leaf organizations), but still, after that, ARPA (later DARPA) poured hundreds of millions into the institution.

They basically paid for my education there and I am not a US citizen and never worked on anything military-related, either while in school or since.


I can't really speak for many other institutions. I know plenty of robotics labs that are doing completely civilian work under NSF grants, grants from corporations, some civilian manufacturing and automation work through the national labs, etc. And even among those institutions working on DARPA-funded projects, not every institution has NREC right down the street working on autonomy systems for ground combat vehicles: https://www.nrec.ri.cmu.edu/solutions/defense/other-projects.... The research done by students at CMU directly makes these types of weapons platforms possible, and vice versa.

It's certainly true that many, many universities are deeply embedded with the DoD, defense contractors, and other weapons manufacturer ecosystems. Certainly CMU isn't exceptional here. But I think it's very important to keep in mind the reality of these programs when they come up in the news, like this article, which is why I left the link to how "the other side" sees these types of research programs.


Glad to see our universities making progress in this important area of national security.


This tech is the stuff of nightmares. People are going to see this unfeeling little chitinous bug-dog-thing crawling in with them, then they die. And it is making it cheaper to kill people in exotic foreign lands.

This is the tragedy of the commons. I'd rather see nobody making these advances, but if someone has to it'd better be people on my side. Can't stop progress :(


Maybe there's a glimmer of hope that someday both sides can send their robots to destroy one another, instead of their sons and daughters? I agree that initially there's an advantage to richer nations, but it's not like a $30M jet fighter, there could be parity.


That's not how it's going to work. There's no point in taking out a relatively cheap robot that can be replaced in a day, when you can take out a person that will take 20 years to be replaced.


But the person you're gonna take out is more likely to be a senior officer, rather than an 18-year-old private.


What a naive statement. Do you really believe no other nation is capable of making such a thing? This doesn’t improve our national security at all. Just ups the ante for robotic infantry, whatever the hell that ends up meaning. Like advancements in war technology has ever been a positive in the world.


> Like advancements in war technology has ever been a positive in the world

Yeah, screw GPS, trauma surgery improvements and better prosthetics, IFF systems, radar/sonar, Internet, ToR, Epipens, and duct tape. /s

Really, I get having a generally anti-war position and wanting to spend money on other shit but pretending like there are no positive externalities to defense spending is childish.


Unfortunately war is kind of a Red Queen scenario where the technology advances everywhere.


> This doesn’t improve our national security at all.

You said yourself that other nations can and will make this too. Avoiding a decrease of national security is the same as improving national security.


This topic is somewhat explored in the movie The Creator.

There are a lot of holes in the plot, but the overall premise and scenario seem eerily plausible.


How does this improve national security?


They are vehicles that can navigate terrain a wheeled vehicle cannot. So they could do anything from operate as pack "animals" for infiltration teams to be armed drones that creep along the ground, harder to detect than a flying one.


Once again, how does this improve national security?

That's offense not defense. It worsens the national security of your opponent but doesn't strengthen your own.


That seems like a rather limited view of the word "defense", more appropriate to football or hockey than warfare. Even for that you could have these drones patrolling the territory/shooting intruders.

But the word "defense" is much wider than that. Is the US supplying offensive weapons to Ukraine contributing to the national defense? Even if you feel the US should not be doing that, I hope you can see the logic of people who do frame it that way.

And Ukraine's in a defensive war against an invader, and for that they have to go on the offense.

(Of course the US renaming the Department of War to Department of Defense in 1947 was 100% propaganda, or to be more charitable, aspirational. There is no question that it has been used offensively).


>But the word "defense" is much wider than that. Is the US supplying offensive weapons to Ukraine contributing to the national defense?

In a war the line between offense and defense is blurred but the national security is at level zero.


> Is the US supplying offensive weapons to Ukraine contributing to the national defense?

It's not. It's contributing to European security, and as such is necessary for the US to reaffirm their suzerainty over Europe, but it's not really about US defense and even less about National Security. (I'm glad they do btw, because we European powers would have left Ukraine fall after deciding it would be too expensive to help them…)


European security is national security via geopolitics. Being a global superpower has many benefits for US citizens.


> European security is national security via geopolitics.

If you consider “national security” to be a meaningless buzzword and not an actual concept, you can say that. It's as accurate as saying “US sports results at the Olympics is a national security issue via soft power”.

Pretty much everything is "national security" by that standard.

> Being a global superpower has many benefits for US citizens.

Sure, but most of them don't have anything to do with national security.


What do you think defense actually is in the current technological era?

You don't defend territory with static defenses or force fields or walls.


Yeah, but do you think these technology can't be copied by others?

Drones and robots made attacks easier, did that help national security?

Countries like Russia or China won't attack because of nuclear weapons but groups like ISIS don't care.


> Drones and robots made attacks easier, did that help national security?

Large flying drones make patrolling a border easier. You don't have to worry about the pilot's fatigue level. Only fuel/battery.

The war in Ukraine clearly shows that smaller drones also help in attacking and defending. Attacking, by dropping bombs or spotting targets. And defending, via spotting incoming enemy forces; spotting enemy artillery that is shooting at you, etc


> Drones and robots made attacks easier, did that help national security?

Yes because the enemy will have them eventually regardless of whether you develop them. Call it a Prisoner's Dilemma if you will, but it's the reality. Besides, unlike nuclear weapons, warfare with autonomous systems is not particularly more cruel than WW1 or WW2 style warfare.


There is little difference between offensive and defensive capabilities. It's all about destroying your opponent. This helps national security because it is an additional and advanced means of destruction that the enemy might not have.


How does it not?

Consider terrain such as the mountain ranges along the side of California. Or the entirety of Japan.

Wheeled and tracked vehicles generally have problems in that terrain.

Legged robots don't. Suddenly your troops intended to operate in mountainous terrain have a reliable robot donkey to use as a packmule.

Many countries (US, Germany, France, Italy, etc) still use horses and donkeys for transport in bad terrain.


There is certain terrain legged robots will struggle with, too.


You say that like it is a bad thing. :-)

Fun fact, in the mid-80's to early 90's the "industry" was "getting ahead" of the Department of Defense in terms of capabilities that could be deployed in an adversarial way. One of the most visible outcomes was the first crypto wars. (Source code became ITAR controlled, emailing a perl script that implemented the RSA algorithm to someone outside the country was an ITAR violation, and we jokingly suggested illegal immigrants get that code tattooed on their body so that it would be illegal to deport them without a license from the Dept. of Commerce.)

Getting an inertial navigation unit w/sensors that weighed less than 10kg (22 lbs) was code word level secret stuff, because beyond the line of sight missiles are a thing.

Anyway, the US DoD (like defense departments everywhere) realized they needed to be more engaged with R&D labs, if only to see what progress they were making, so that they could anticipate whether or not that progress might show up as a threat in some way. On the plus side, that freed up some money that would have been in black budgets for universities that were doing similar research anyway, and as a funder the source, in this case the DoD, generally gets a non-exclusive perpetual right to use any resulting IP out of that sort of funding arrangement, even when patented.

Bottom line, researchers are gonna research, so make sure what the future holds is not gonna show up unexpectedly on the battlefield or in other covert ways.

All that said, as someone who has been involved in building this "scale" of robot (under 1m, self powered) the progress has been freakin' unbelievable. In part because of the fact that you can put a supercomputer on one that weighs less than 500 g and runs for 8 hours on a battery that weighs less than 1 kg. Sensors, video, real time image processing. All pretty stunning.

The Ukrainian use of off-the-shelf drones, and their rapid development of weaponized "drone munitions", tells me that the DoD is correct in wanting to keep track of what is going on with robots that can do "extreme parkour."


Ironic comment given the implied nationalities of the coauthors (Xuxin Cheng, Kexin Shi, Ananye Agarwal, and Deepak Pathak) and the chosen robot form factor (Unitree)


Is it? I think it's important to understand that this type of robotics research requires serious ethical considerations no matter where it happens or who works on it—this work specifically was funded by the US military, but it applies just as equally to weaponization projects happening within China.


Does the Chinese military sponsor education for US Citizens on military-adjacent projects in China? Why (not)?


I assume some combination of: they don’t have the same incentives to try and draw in international talent as the multicultural US does, and US k-12 schools don’t produce students that would do well on their admissions tests or whatever. Also we have pretty good engineering schools here in the US, so I don’t see why anyone would take them up on that offer if they decided to make it.


This isn't the only time someone has written "armed legged" on the Internet, but according to Google it's definitely one of not many.


Good. I want the best minds in our country working on getting our boys out of harm's way and robots doing the hard, dangerous and dirty work.


The Internet and UNIX systems supporting this comment only exist thanks to US military money.


Which is great?


Not sure what you're trying to get at, but hundreds if not thousands of universities are working closely with DARPA, which is a DoD agency that sponsors mostly high-level and non-classified research. Much of the research has weak connections to actual military usage (there have to be _some_ connections, but many of them are pretty weak).


Only thing missing in that video is some rando walking by on campus in the background yelling “when they come, I hope they come for you first!”


Real feedback via proprioception and hand-eye coordination is so much more powerful than closed-control movement planning.


I need that Go1 support asap. Time for my little Unitree friend to evolve and keep up with its GPT4 brain…


Horrifying


Quiet actuators, a proboscis, and a bit of ricin, and the world waits.


When can I have one of these to patrol around my house?


Today? But the bigger version is around the price of a higher-end car.


Can it unpack the dishwasher?


One step closer to ED-209.


Bad boys bad boys, watcha gonna do when they come for you?


That's basically a poodle!


I was staying in Whitechapel, London recently. There are many religious folks there who are horrified by dogs, don't want dogs to touch them, and shun dogs and give them a wide berth on the street and inside, like on buses.

I was on dog walk duty and always held the lead close, but I knew they were worried I wasn't so conscientious and would let my dog friend run amok on their garments.

So I was trying to imagine an example of a creature that would horrify me similarly. Like perhaps if people had giant BEETLES as pets, and thought it was all grand, and I was like "what the FUCK are you on, my friend?", and I was pretty happy with that.

But I think a closer cut to how disconnected this is from what seems appropriate, and normal, and reassuring, would be these robots. Clearly, some people don't really understand how horrifying this is, particularly with elevator music as the background. And if you want to know how upsetting the elevator music is as a background to set tone, I strongly urge you NOT NOT NOT to google "crab club" on youtube.


  "The Hound half rose in its kennel and looked at him with green-blue neon light flickering in its suddenly activated eyebulbs. It growled again."


  "THEY sent A SLAMHOUND on Turner's trail in New Delhi, slotted it to his pheromones and the color of his hair. It caught up with him on a street called Chandni Chauk and came scrambling for his rented BMW through a forest of bare brown legs and pedicab tires. Its core was a kilogram of recrystallized hexogene and flaked TNT. He didn't see it coming. The last he saw of India was the pink stucco facade of a place called the Khush-Oil Hotel."


-

  "Ng Security Industries Semi-Autonomous Guard Unit #A-367 lives in a pleasant black-and-white Metaverse where porterhouse steaks grow on trees, dangling at head level from low branches, and blood-drenched Frisbees fly through the crisp, cool air for no reason at all, until you catch them."


This gives me Black Mirror-fueled nightmares... not least because those videos suggest that a Black Mirror scenario may not be that far off and what probably saves us for now is battery capacity.


"More Black Mirror than Mirror's Edge" was my takeaway too. In terms of Parkour this is slow and awkward, but it's like watching a baby taking their first steps towards murder. The most important hurdle for many soldiers is the one these machines never needed the help of AI to get over. Even after dehumanizing the enemy many soldiers are "non-firers" and won't shoot another person even if their own life is in danger, while those who kill children and civilians are often haunted by that act.

These robots won't have any problems pulling the trigger at anyone or anything. They won't hesitate or refuse an order the way Stanislav Petrov did. They won't be trained to have empathy or respect for life. Humanity is the final check against the worst atrocities war demands from the people we send out to kill each other, and devices like these solve that "problem" by dehumanizing the soldier.


Don’t worry, we’ll learn how to kill the fuck out of robots too…


Ideally, the panacea to such nightmares is democracy. In practice, history shows that greedy people with power often don't understand anything other than force (sadly).


> Ideally, the panacea to such nightmares is democracy.

Come on man, this is delusional. I want to believe it too but be realistic.


Idk, it really depends on how strong your affiliated lobby group is. The average voter competes against the Harvard Kennedy or Oxford etc. graduate-led super PACs, lobby groups, whatever. So yes, it gets easier to believe "democracy" doesn't work or exist. However, lobby groups and elected officials wouldn't be a thing if we (society) didn't have the impetus, burden, or requirement to collectively believe (or have others do) that a thing such as democracy does exist. So if someone has a requirement to make you believe something, you have to ask why. In most cases you can acquire some kind of resolution using that.


What's the Black Mirror scenario?


General dystopian outcomes. After world powers harness robotic armies, revolts will no longer be possible and citizens will be permanently enslaved and must succumb to whatever oppressions are put on them.


Why is walking the salient technology that will enable this? Why not wheels, or treads, or flying?


Psychologically because it's life-like (uncanny valley).

Practically, because legs are superior to wheels when it comes to pursuing a human target. They may be superior to flying as well indoors, for opening doors, etc. And then there is the energy expenditure and additional mass constraint when airborne.

Research on legged robots is not just for 'fun', it's because they are better at going through obstacles and rough terrain than wheeled vehicles.


Part of it is because we already have flying/wheeled drones. Walking is just the last step of all-terrain capability.

Flying can get you almost anywhere, but not quite. It is both very loud and has issues in enclosed spaces.

Wheels/tracks are very efficient at moving mass, but have issues crossing some materials such as exploded building internals like you would find in a war zone.

Walking/climbing is the last bit to get you inside enclosed 'human' spaces such as buildings and tunnels.

The war occurring now in Gaza would be an example of what future battle would look like.... A target building is hit by a bomb and mostly destroyed. Smaller automated drones quickly fly over the area and assess visible targets and threats for follow-up artillery. Multiple mobilized tracked/wheeled heavily armored drones would address any residual ground-level threats. Upon reaching the target building, the larger drone vehicles would release multiple smaller 'walking/crawling' style drones to ascend the rubble, either addressing any remaining targets themselves, or reporting back to a command station for some other group to make a decision on what to do next. The final goal of these walking units is to probe deep into the rubble to ascertain if any tunnels survived the initial bombing, and if so to begin mapping any threats in said tunnels.


It's not necessarily walking, just the improvement in mobility which is one precursor to self-sufficient policing


Or obstacle avoidance and hints of advancing capabilities for learning and adapting? (Apparent) autonomy in general?


Sometimes when I watch videos like this I really wonder what the future is supposed to look like. Heavily regulated, I guess? All-out chaos? Imagine what a criminal organization could do with a bunch of robots that are this agile. It's hard to imagine.

Personally, I think we'll get really good at killing robots and they'll get better at killing people (military).


Oh, right. There is a Black Mirror episode in which dog-like AI robots are hunting down humans [1]

If you watch it then those research legged robots will look rather more sinister... or maybe it's just me. That episode really stuck with me.

[1] https://en.wikipedia.org/wiki/Metalhead_(Black_Mirror)


They already do, really, but this is just a method of locomotion. If they want to hunt, they can hunt even if they're drones, or have wheels.


IIRC the robots are the equivalent of land mines, continuing to cause harm after the war is over. That episode felt very plausible and horrifying.



Oh, this was automatically reposted from the second-chance pool; I originally posted it yesterday, before that other story. It's a shame it isn't getting more attention. IMO it deserves to be at the top of HN for hours. This research represents how quickly the methods in robotics are changing. I think people underestimate how capable robots will be in the near future.

Also the comments on that other story are just sad. I understand war is in the news but I hope it's not all we can think about! Also I'm surprised by all the people who think this is not impressive because "a 10 year old could do this" or whatever, that's actually ridiculous.

I guess HN just isn't the right community to appreciate real hard technology like this anymore. This makes me sad, especially because I don't know of anywhere else that's better.


It was a collision between the SCP (https://news.ycombinator.com/item?id=26998308) and a regular submission, but as this one is on the front page now, we'll merge the comments hither.


Shared this a few days ago but it didn't get any traction then: https://news.ycombinator.com/item?id=37690384


It walks into your room, discovers you, and moves in to make the kill, but for some reason the gun is trained at an odd angle. You see it take two shots that veer across to the corner of the room before it hilariously enters some self-correcting routine, seems to rediscover you, and finally blows your head off.

You get a chuckle before you die


You better have your own robots protecting you.


The only protection from a bad guy with a robot is a good guy with a time machine and a shotgun.


Wait, who was the bad guy with a robot? I thought there were only bad robots.


Or become undetectable.


"It turns out that our kill-bots don't shoot at mimes for some reason?"


"They're not human."


Pretty unimpressive given recent advances in AI


How would you recommend cutting edge robotics research to approach this instead?



