Mars helicopter employs advanced control techniques to survive in-flight anomaly (control.com)
210 points by dougmany on June 18, 2021 | 97 comments



I see no mention of "advanced control techniques"? Sounds like there is just a limit on roll/pitch angles and a limit on distance applied. Saying "advanced control techniques [for multirotors]" sets an expectation for something along the lines of Tedrake's Underactuated Robotics approaches: http://underactuated.mit.edu/


Sensor fusion is part of the control system. Sensor fusion that incorporates visual information is outside of the typical classical controls sphere and is likely considered part of an advanced controls segment of the device's firmware, as opposed to the bog standard deterministic controls algorithm. You need not gatekeep here, this is a real term.


I don't think this is gatekeeping. Regardless of the terminology used within the field, the intended audience of this pop article will interpret "advanced" to mean actually unusual or at the cutting edge. Relying on the public's misinterpretation of jargon ought to be criticized.


> ... intended audience of this pop article ...

Are we reading the same page? I see datasheets linked in the navbar, an article about an EtherCan box, and white papers about encoders. If any site does, this one certainly has a very specialist audience in mind.

Besides, I believe what constitutes "advanced" gets wider rather than narrower as you move towards a less specialised audience. To the general population even two stacked PID loops would qualify as "advanced", while practitioners would call that "conservative" or "old-fashioned".
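
For illustration, a toy sketch of two stacked (cascaded) PID loops in Python; all gains and the unit-mass plant are invented:

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, err):
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    dt = 0.01
    outer = PID(kp=1.0, ki=0.0, kd=0.0, dt=dt)  # position error -> velocity setpoint
    inner = PID(kp=4.0, ki=0.0, kd=0.0, dt=dt)  # velocity error -> thrust command

    pos, vel, target = 0.0, 0.0, 1.0
    for _ in range(500):                  # 5 simulated seconds
        vel_sp = outer.step(target - pos)
        thrust = inner.step(vel_sp - vel)
        vel += thrust * dt                # crude unit-mass plant
        pos += vel * dt
    print(round(pos, 3))                  # settles near 1.0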


In the motion control and process control industries, deterministic or SISO systems like PID are typically part of what someone would just call 'controls'. Advanced process control (APC) is an actual industry three-letter-acronym term relating to 'meta-controls', where the controller is itself controlled by an outside system like the supply chain or a plant tuning system, etc. This is the point I was trying to make - I'm not making a judgement call on what is or is not sufficiently 'advanced', only pointing out that this term has a meaning that the parent post I was replying to might not have understood.


> Sensor fusion that incorporates visual information is outside of the typical classical controls sphere and is likely considered part of an advanced controls segment of the device's firmware, as opposed to the bog standard deterministic controls algorithm.

Not so advanced as very correctly pointed out by https://news.ycombinator.com/item?id=27554949

The motor-attitude control loop cannot, and should not, be "fused" with the navigation layer, whether INS or optical flow sensor.

The bog standard deterministic controls algorithm here is "bog standard superior." Computer vision/optical flow sensors have much lower precision than gyroscopes and accelerometers, let alone aerospace-grade ones.


Do any of these commenters have experience designing small "helicopter" "drone" avionics systems using off-the-shelf hardware, as in Ingenuity? For instance, you mention "aerospace grade" inertial sensors, yet Ingenuity apparently uses COTS components in that role^[1].

There are smart people on HN, but unless they have actual professional experience designing similar vehicles, I'm inclined to give the JPL folks the benefit of the doubt. It's also worth considering that they are the only team that has ever flown such a vehicle on another planet, which imposes constraints that even other experts may not be familiar with; that they were working with a limited budget, and making heavy use of existing open-source code; and that, generally speaking, we weren't in the room.

On the other hand, maybe they really are incompetents whose avionics system is "designed wrong", and who do need these things explained to them so that they can learn faster, as the commenter you linked so delightfully put it. In that view, I guess their groundbreaking achievement is probably more dumb luck than anything else--maybe JPL should do some firing and hiring so that things don't turn out worse next time!

^[1] https://spectrum.ieee.org/automaton/aerospace/robotic-explor...


Yes actually, any hobbyist who has done anything with drones and localization via optical flow positioning understands the problem.

The commenter is correct. And saying something is done wrong is not the same as saying someone is incompetent.

While I understand you're trying to point out hubris, you're swinging too far the other way and appealing to authority.

See also: https://news.ycombinator.com/item?id=27555761


> While I understand you're trying to point out hubris, you're swinging too far the other way and appealing to authority.

I'm appealing to the fact that they sent the thing to Mars and flew it. If you think "any hobbyist" could have trivially done a better job, which seems to be essentially what you're saying, there isn't much for me to say to you other than that I strongly disagree. And if I'm supposed to be the one who's appealing to authority here, I'm not sure what everybody else thinks they're doing.

I expect JPL tends toward having generalists working on projects like this, and I expect they do have things to learn. Nothing in my comment contradicts any of that. But the level of armchair quarterbacking in here is crazy and IMO largely unjustified.


They expected the camera not to drop frames. Their optical flow system tracked time independently of actual frame time. Anyone who's done optical flow, or even anything with video recording on embedded devices, knows these are known failure modes that should be handled properly.

That is what's being discussed here, this is the failure mode in question. So yes, those experienced in optical flow and embedded control systems _would_ have handled this specific failure better.

But none of us are saying any hobbyist could have trivially sent it to Mars.

It should be obvious what we are specifically addressing: this kind of error is not the sort that should happen in a properly thought-out, bog-standard real-time system.

> JPL tends toward having generalists

And that's fine too, plus dumb mistakes always happen.

So it shouldn't be much surprise when the "armchair quarterback" responses show up, and rightly point out the issues. It would be more crazy not to be incredulous.


> They expected the camera not to drop frames [...] That is what's being discussed here

It isn't, actually. The comment I responded to, and the one it linked to, are broadly criticizing the sensor fusion/control loop implementation--the thing that saved Ingenuity after the optical flow subsystem started returning bad data. That's what's being discussed here, in the context of this subthread where I entered it.

The dropped frame thing is much more cut and dried, and it's obviously a bonehead mistake. It's also the kind of thing that's always going to happen, as you say. Many of the commenters here seem to be generally at odds with that reality--I don't believe the state of the art in software development has yet achieved guaranteed bug-free implementation of diverse real world applications, but you wouldn't think that from reading some of these comments (e.g. the one you linked above).


The article title is still misleading. The sensor fusion was the root cause of the malfunction; it was not the advanced control technique that was deployed to survive the anomaly.


My understanding from reading an earlier explanation from JPL was that it appeared the issue wasn't so much from the missing frame itself but rather the software not being tolerant of the frame count being off.

IIRC, it seemed more like a missing test case or scenario in the software validation suite might have been the ultimate root cause.


Agree. It's interesting that they don't treat the optical and gyroscopic systems as two separate sensors, with the possibility of disregarding one if it disagrees too much. A quick restart of the visual system would have been the optimal solution #2020-hindsight
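
A toy sketch of that gating idea in Python (thresholds and noise figures invented):

    def fuse(estimate, readings, gate=3.0, sigma=0.1):
        # keep only readings within gate*sigma of the current estimate
        accepted = [r for r in readings if abs(r - estimate) < gate * sigma]
        if not accepted:                 # everything disagrees: hold the estimate
            return estimate
        return sum(accepted) / len(accepted)

    est = 1.00
    print(fuse(est, [1.02, 0.98]))       # both pass -> averaged to 1.0
    print(fuse(est, [1.02, 5.00]))       # optical spike rejected -> 1.02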


How would you know which one is wrong in this scenario?


This reminds me of the adage “Never go to sea with two chronometers; take one or three” [1] to avoid this exact conundrum

[1] https://en.m.wikipedia.org/wiki/Triple_modular_redundancy#Ch...


Incidentally, I learned the hard way why you never run two instances of Consul (or any Raft powered stack), only 1 or 3. All sorts of wacky state can occur.


You have the navigation system as a third ‘sensor’, but in this case the sensor with an abnormal spike could be disregarded


> “You need not gatekeep here, this is a real term.”

Not my monkey; not my circus. But… is critique of terms “gatekeeping”? I understand this term in the context of ownership (i.e. gatekeepers for books are publishers/printers who own the means of production.)

I greatly appreciate some level of debate of terms here on HN.


I wish they had gone into more detail. Unfortunately, my experience is that, yes, such limiters do count as "advanced."

The first time I got my hands on a working mounted arm, I was cautioned again and again about the need to run any new program in low-speed mode first, because the arm had no limiting logic and would cheerfully power-bomb its own base with all the force and torque its motors could muster if I told it to.

Preventing that is still considered an "advanced technique."


Fusing video with IMU for navigation is called VIN (visual-inertial navigation) or VIO (visual-inertial odometry) and the field has made enormous progress over the last 10-15 years. It's the same technology that the iPhone uses for all its AR features.

Dropped frames are one of the easiest things to handle. Yes, the visual feature tracking depends on frames of video but even cheap phone IMUs these days are good enough to dead-reckon for a second or two, especially when embedded into a sensor fusion framework, so the prediction errors resulting from a single lost frame should be very minimal and not enough to throw off the tracking.
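
For illustration, a toy dead-reckoning sketch in Python (invented numbers, nothing like a real VIO backend):

    def propagate(pos, vel, accel, dt):
        # constant-acceleration integration between camera frames
        pos += vel * dt + 0.5 * accel * dt * dt
        vel += accel * dt
        return pos, vel

    pos, vel, dt = 0.0, 1.0, 1 / 30.0
    for frame in range(6):
        pos, vel = propagate(pos, vel, accel=0.1, dt=dt)  # IMU-only prediction
        if frame == 3:
            continue                 # dropped frame: no vision correction
        # a vision update would go here; one missing frame just means the
        # next correction spans 2*dt of IMU prediction instead of dt
    print(round(pos, 3), round(vel, 3))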

That's why I find it hard to believe that the VIN in use by the Mars Helicopter (part of a multi-billion-dollar program) wouldn't be able to deal with a dropped frame. It just doesn't add up. I suspect that the situation is much more complex than what the article suggests and that more things went wrong than just a dropped camera frame.


> It's the same technology that the iPhone uses for all its AR features

iPhone was not a first.

"Agent V" (2006) — a mobile AR game preinstalled on Nokia 3230 already has such technology.[0]

[0] https://news.ycombinator.com/item?id=20957157


1) He didn't say iPhone was first.

2) Was this something you even really needed to "Well, ackshully..." anyway?


> 1) He didn't say iPhone was first.

I'm not arguing. I'm just saying.


It means that the people who know most about closed-loop vehicle control were not involved in the design of the copter. Such fragility is a really elementary design mistake any experienced engineer would not make. Certainly the vehicle that delivered the lander would not suffer from the same mistake.

My interpretation is that the copter, as an inessential system component, was seen as an opportunity for junior people to get some end-to-end experience. I hope they are learning the right things.


The article makes it sound like after the missed frame, every subsequent frame had the wrong timestamp and was processed wrong, so it sounds like more of a bug. If the subsequent frames had the right timestamps, and the calculation was based on delta timestamps, then it would basically interpolate over the missing frame and then recover. The fact that every subsequent frame was assumed to be 33ms in the past was the issue here.
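
A toy sketch of the difference in Python (invented timestamps):

    timestamps = [0.000, 0.033, 0.066, 0.132, 0.165]  # frame at 0.099 dropped
    velocity = 1.0                                    # m/s, constant for simplicity

    pos_delta, pos_fixed, prev_t = 0.0, 0.0, timestamps[0]
    for t in timestamps[1:]:
        pos_delta += velocity * (t - prev_t)   # real delta spans the gap
        pos_fixed += velocity * 0.033          # assumes 33 ms, loses the gap
        prev_t = t

    print(round(pos_delta, 3))   # 0.165 - correct
    print(round(pos_fixed, 3))   # 0.132 - permanently one frame behind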


And that being a system boundary thing, it’s easy to imagine it as an integration issue, where the controls side maybe assumed it would get a dummy frame or something.


For a real-time attitude-control feedback loop to have been subject to upset by such a bug is, exactly, the design flaw.


A quick search would tell you that the navigation lead for the copter is David S. Bayard, an award-winning scientist in the control field at NASA.

So maybe the issue is not as simple as a news article portrays it.


Then, apparently the real-time attitude-control feedback loop was delegated to somebody else less experienced. Navigation is a wholly different responsibility that may happen to rely on input from some of the same hardware.

On the other hand, scientists are often not as good at design as engineers. They are not routinely trained in it, and are expected to deduce it from first principles. (If you have seen code written by scientists, you will know what I mean.) When they err, it is typically by assuming that theory and practice are the same.


do you have ANY evidence to support this claim?

or are you relying on a third party article's interpretation and then extrapolating that even further?


If the failure mode is being described accurately, then it's clear to those who have worked on hardware that these are not the sort of problems that should be encountered this late in the cycle (on Mars). Perhaps we just need more info on the situation. But as it currently stands, things smell a bit amateurish.

See also: https://news.ycombinator.com/item?id=27555761


The flight instability is, all by itself, ironclad proof of a design flaw. When a constant time-offset navigational input can affect real-time attitude control, that is a rookie booboo.

The exact nature of the design flaw is subject to further investigation.


Here's a research paper that might provide more information: "Vision-Based Navigation for the NASA Mars Helicopter" https://sci-hub.se/10.2514/6.2019-1411

> This paper provides an overview of the Mars Helicopter navigation system, architecture, sensors, vision processing and state estimation algorithms.


I can't think of a polite way to say this, but as someone who professionally develops drone software, both of the software failures experienced by Ingenuity have been embarrassingly amateur at a technical level.

The first failure, which delayed the initial spin test, was described as a "watchdog timeout", which for anyone not familiar with embedded development basically means the code crashed. We all write code that crashes, but I am having trouble thinking of an excuse to justify the fact that their code crashed before takeoff, on Mars, and they didn't see it coming. There is nothing about sitting on the ground on Mars that shouldn't have been tested repeatedly on Earth, and testing in production is _really_ not the right way to do aerospace development (although Boeing Starliner would beg to differ)

Similarly, there are a huge number of things that can and will result in dropped frames when running Linux on a Qualcomm mobile chip, and having a software stack that infers frame timing purely from the sequence number is brittle, and would definitely not have passed code review and testing where I work (I actually checked, we do have a robust solution). If I had to guess, I suspect the root cause of the dropped frame wasn't actually anything exciting like a cosmic ray, but instead was some run-of-the-mill event that would have been caught by a couple hours of flight testing on Earth. Either way, it shouldn't have made it to Mars.

I'm sure that there are a lot of great engineers working on the Ingenuity project that _don't_ write these sorts of bugs, and am glad that these amateur fuckups (barely) haven't crashed the drone before it has been able to do some incredible technology demonstration work.


From my perspective, I see a project that took years of prep. Multiple papers were written. It was tested in software sim and in NASA's physical space simulator[1]. And it finally _successfully_ flew on Mars, with some minor bugs.

In my opinion, assuming they were "testing in production" (production being Mars!), or writing code that would "definitely not have passed code review", or that they did not do a couple of hours of flight testing on Earth, is an unnecessarily unkind assessment of this project.

[1] Helicopter Models and Test Facilities: https://rotorcraft.arc.nasa.gov/Publications/files/Balaram_A...


I had a friend who quit their JPL software job (to go work for a large, bureaucratic software firm) because JPL required such an extensive testing and review regimen before every change that the work got boring for being too slow.

I am very skeptical of the claim that this code was not tested before launch.


Thanks for your interesting post. I wouldn't be surprised if you are correct, at least based on my experiences with pseudo-governmental software development. I've noticed that, as a percentage, there seem to be fewer folks who've developed the grizzled paranoia that comes from repeatedly shipping commercial software under unreasonable constraints.

As an amateur RC pilot familiar with some of the excellent RC flight control systems, it would have been a huge missed opportunity if JPL didn't invite some experienced engineers from the commercial and consumer drone community to provide input (QA folks too!). It's hard to imagine they wouldn't have gotten ample volunteers to spend a few days helping out.

I understand JPL is already designing a larger and more capable iteration. It would be cool if experienced drone flight control devs such as yourself dropped the team a note.


It might be related to the hostility of the environment. The chips aren't radiation-proof, so some bit flips due to radiation are to be expected.


Why on earth wouldn't they use rad-proofed chips on a planet closer to the sun, with virtually no atmosphere compared to earth?


Because using off-the-shelf components on Mars is a vital part of the mission...

Of course they use rad-proofed chips in the rover. But using a newer Qualcomm processor instead of an ancient chip was a big part of the idea.


Um, it's not closer to the sun...


It’s actually further from the sun by around 100 million km.


Watchdogs don’t just mean crashes. They are useful specifically because they can be used to terminate non-crash conditions such as infinite loops where forward progress is not being made but the program is still running.
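A minimal sketch of that pattern in Python (timeout values invented; a real embedded watchdog is a hardware timer, not a thread):

    import threading, time

    class Watchdog:
        def __init__(self, timeout):
            self.timeout = timeout
            self.last_kick = time.monotonic()
            threading.Thread(target=self._watch, daemon=True).start()

        def kick(self):                      # called by the healthy main loop
            self.last_kick = time.monotonic()

        def _watch(self):
            while time.monotonic() - self.last_kick < self.timeout:
                time.sleep(self.timeout / 4)
            print("watchdog: no forward progress, resetting")

    wd = Watchdog(timeout=0.5)
    for _ in range(3):
        time.sleep(0.1)                      # doing useful work
        wd.kick()
    time.sleep(1.0)                          # "infinite loop": no more kicks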


It is sad that Skydio wasn't more involved with helping in the creation of Ingenuity.


The article says the problem was a dropped frame from the camera, but that just further piques my curiosity:

Presumably they use some kind of Kalman filter, but those are easy to program to account for missing frames, or frames at non-discrete timepoints, perhaps even for screwy camera images if the programmer had a reasonable prior for the likelihood of it happening. Kalman filters by design account for measurement error.
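
A toy 1-D Kalman filter sketch of that property in Python (all noise values invented):

    def predict(x, p, q=0.01):
        return x, p + q                      # random-walk process model

    def update(x, p, z, r=0.04):
        k = p / (p + r)                      # Kalman gain
        return x + k * (z - x), (1 - k) * p

    x, p = 0.0, 1.0
    for z in [0.9, 1.1, None, 1.0]:          # None = dropped frame
        x, p = predict(x, p)
        if z is not None:                    # no update when the frame is
            x, p = update(x, p, z)           # missing; variance just grows
    print(round(x, 3), round(p, 4))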


It seems like the issue wasn’t that there was a dropped frame, it’s that the time slot for that frame got filled by the next frame, then every subsequent frame was off by one resulting in a persistent timestamp offset of the vision data from reality for the remainder of the flight.

I didn’t read into it too much so I may not have all the details right, but I think this is the gist of it.


That would be a straight-up, avoidable software/hardware bug: The incoming timestamp is incorrect, and garbage in is garbage out.

That would make me curious how the timestamp error occurred: software, hardware? Camera or Navigation code? I assume they have very high standards, what was the process failure point?


Or timezone bug! Always hard to test.


Thank you for your comment, because it triggered an interesting chain of thoughts about a semi-related problem I’m working on at work.

Usually with a Kalman filter, you’re taking into account the spatial measurement error (gyro-measured roll rate error, accelerometer-measured acceleration error, etc) but I don’t think I’ve ever encountered a system that explicitly modelled sensor latency variation relative to timestamps. Based on the description of the problem they encountered here, I suspect what happened is that it lost a frame but didn’t adjust the “photo timestamps” appropriately; every frame that came along afterwards would have had an incorrect timestamp? Even if the Kalman filter was set up to handle “this photo was taken 20ms ago” when doing its forward integration, if they didn’t model “this photo was taken 50ms ago but is reporting that it was taken 20ms ago” then you’d pretty readily get the kinds of oscillation they were getting.
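
Toy numbers for that effect (velocity and latencies invented):

    velocity = 2.0               # m/s of horizontal motion, assumed
    assumed_latency = 0.020      # what the filter models: "20 ms ago"
    actual_latency = 0.050       # what the lost frame caused: "50 ms ago"

    phantom_error = velocity * (actual_latency - assumed_latency)
    print(phantom_error)         # 0.06 m of spurious innovation, every frame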

Edit: yeah, just like the sibling comment said :)


HoloLens provides a timestamp with every frame from each sensor, which can be used for sensor fusion outside of the system usage. Windows.Perception.PerceptionTimestamp can be used for either recorded data (e.g. camera) or for future predictions (e.g. predicted device position). The predicted latency is also used to adjust the render to ensure the viewer's perspective is correct even though the draw calls may be lagging slightly behind the viewer's position.


NASA systems appear to have the property that they are both perfectly designed when HN commenters do not understand the code and amateurishly designed when they have errors.

This bathtub style curve for perception of NASA design by HN commenters makes me question if the perception correlates with reality.


You're simply coming to the astute observation that any individual HN commenter who may very well be a genius WRT an extremely narrow area of expertise will be just as hopelessly clueless as the average person when it comes to any area outside of their expertise.


Perfectly designed when it's right and badly designed when it's wrong, you mean? Strange way to state a tautology to make yet another boring statement on ye olde HN commenter.


No, actually, I do not mean that. I mean exactly what I said.


Terrible article - rehashes old news under a new title and, even though it's on "control.com", never mentions what said advanced control techniques are. At least it has pictures.


Article from the Mars Helicopter chief pilot: https://mars.nasa.gov/technology/helicopter/status/305/survi...


Time to set up a GPS constellation on Mars? Somehow forgot that this navigation aid -- that we take for granted on Earth -- is missing on this other planet.


Pretty sure SpaceX's first deliveries to Mars (orbit) will include just that. I think they could probably use Starlink satellites without much modification (most important one I can think of would be more panels to cope with the reduced solar irradiance)?

The GPS constellation has 32 satellites. A GPS satellite weighs ~2000kg. Starship should get 100-150 tons to Mars - the math checks out! (Disclosure: I got all my rocket science from sending Kerbals to their doom.)

So, MPS for Mars, LPS for Luna? Just imagine the amount and quality of rover footage we could get if each of them had (constant!) 1Gbps uplink to Earth...


> satellites without much modification (most important one I can think of would be more panels to cope with the reduced solar irradiance)?

atomic clocks


This got me interested so I brushed off my notes from grad school--GPS predecessors like the US Navy's Timation first used quartz oscillators, then graduated to atomic clocks in orbit. However, even now ground station atomic clocks issue periodic time corrections to satellites in orbit. Since we can get accurate time of flight to Mars, I imagine satellites without atomic clocks could be used, albeit with less stable pseudo-range estimates. This could be overcome by using more satellites, giving more favorable pseudo-range variances.


Are those actually necessary?

Not an expert, but the whole network is in communication (with perfect knowledge of the satellites' locations, no less); shouldn't they be able to run an ntpd-like protocol to stay in sync, and just sync ground clocks to "Starlink consensus time" instead of "real time"?


There is a pretty decent description of the problem here: https://physicscentral.com/explore/writers/will.cfm

I'm also not an expert so I don't know if you can solve it with syncing. Maybe you can. But that doesn't sound like a "just" concept to me - any solution to this is going to be complicated.


It's funny/amazing to me how we lived without GPS for thousands of years, but it's become so integral to our society that it's one of the first things we're setting up there before visiting.


> The first and most moral responsibility of Freeland will be establishing a decent brewery on the new planet. (Civilization: Beyond Earth - Hutama)

Sure, our ancestors (who could make their way and thrive in the wild) would probably consider me (who gets lost the moment I make a wrong turn and depends on the local supermarket) a complete idiot, especially since I couldn't recreate any of the technology my daily life so depends on... But I've got central heating and clean tap-water, so I'm happy with the exchange.

The higher we climb the ladder of technological dependency, the more calamitous it becomes should we ever fall off. Best not to look down. (I think I'm supposed to bang rocks together to make fire, right?)


>(I think I'm supposed to bang rocks together to make fire, right?)

Won't work very well unless you have flint specifically :-) Even if you know the techniques, starting a fire without any manufactured tools, or tools to make those tools, is really, really hard.


It's possible to just spin a dry stick against another dry stick with your hands. Takes < 5 minutes to get a fire going.


As someone who did this sort of things in Boy Scouts, this is not super-simple just using natural materials. Maybe you're superman but starting fires with just natural materials was actually very difficult for our ancestors for a very long time. And is difficult today.


I'd invite you to watch the show Alone, where many survival experts have had to give up in the first day or two because they opted not to bring a fire steel and don't want to freeze.

You're right that it's possible, but how long it takes, and whether or not you can do it, are dependent on skill and environment. Wrong kind of wood? Screwed. Been raining for a day? Screwed. Sometimes this can be overcome, but you have to know what you're doing, and it can still take hours to generate enough heat to dry the wood until it's possible to friction combust it. Try it sometime, just bring a lighter with you.


A fire steel is probably the simplest low-tech way to start a fire. You still need some expertise like finding a mouse nest or other flammable material even if it's been reasonably dry.

But friction methods are tough, even if you have a well-made spindle and fireboard--which of course require tools. You're not easily creating those without tools to make those tools. And, even then, it's not super easy especially in a non-optimum environment.


The problem of determining position (specifically the longitude part; latitude is relatively easy) has been one of the most pressing technical problems in human history. See https://en.wikipedia.org/wiki/Longitude_rewards .

No kingdoms offered huge prizes for the first electric light, or the first antibiotic, the first radio, or the first computer. But geolocation has always been considered a big deal, and it arguably wasn't well and truly solved until the advent of satellite navigation. It'd be surprising if it weren't being planned for the next generation of Mars orbiters.


GPS is also “easy” to setup if you’re already coming in from space.

I wonder what the “useful minimum” for satellites would be if you wanted, say, relatively accurate positioning once every orbit or each day.


I get the humor, but if you think about it:

Almost all of the technologies that we would need for survival and productivity on Mars are recently-invented.


One of the main risks for early explorers was simply getting lost.


I think it would be an APS constellation, because Martian orbits like areosynchronous get their prefix from the Greek Ares.

That would be really cool, I wonder if it would be faster and more precise because there would be negligible human-produced radio noise to interfere and the Martian ionosphere is much thinner, or if it would be worse because the thinner and weaker atmosphere and magnetic field don't protect the receivers from solar noise.

I wonder if a rover could drop a few beacons for time-of-flight 2D triangulation, it's not like they're moving hundreds of miles away over the horizon.


But the G in the Global Positioning System doesn't stand for Geosynchronous...


Indeed, satellite navigation sats aren't even in geosynchronous orbits.


It might be enough for the rover to just drop a few radio beacons at various distances.


GPS doesn't have to be satellites. It can simply be beacons on the ground. Often used terrestrially to get a few more decimal points on a fix.


They could use UWB positioning systems. Accuracy is still not the best, but power consumption is very low and range more than adequate for a small base camp like the landing area. Solar power alone should be enough to power each beacon during Mars' daytime.

https://www.firaconsortium.org/discover/how-uwb-works


Not sure what techniques exactly you’re referring to, but the commonly used techniques for GPS enhancement (e.g. RTK [1]) are not simple distance measurements between GPS receivers and base stations but rather both stations measuring the satellite signal and using the difference of the signal in both locations to improve position estimates.

I imagine it’s possible to set up a ground-based network, but you would need a high density to cover large surfaces (you want to see at least four stations from every position). I also imagine that it would be difficult to get accurate vertical positions if the stations are all in the same horizontal plane.

[1]: https://en.wikipedia.org/wiki/Real-time_kinematic_positionin...


Back when the GPS signal was fuzzy, DGPS corrected it by broadcasting the current offset from a known point. [0]

Before GPS, LORAN [1] and its variants were used, relying on ground stations.

[0]: https://en.m.wikipedia.org/wiki/Differential_GPS

[1]: https://en.m.wikipedia.org/wiki/LORAN


Yes, but wouldn't help with this particular problem. This was a roll/pitch issue, not fundamentally a position issue.


It stemmed from incorrect estimation of the position and thus of the velocity, which the helicopter attempted to correct.


> the inertial measurement unit (IMU) and the navigational camera. The IMU measures acceleration in three dimensions, using data from several sensors to estimate altitude, velocity, and position. Even though this system samples at 500 Hz, the error would accumulate over time, causing the helicopter to become lost quickly.

I understand the IMU is not an ideal input and integration over time leads to positional errors. But gyros are much better, and the orientation of the drone in flight is paramount. What I wonder is what 'advanced' control law allowed the drone to become unstable wrt. orientation when there was noisy positional input.


I suspect what happened is that the position and velocity estimates from the VIO system (cameras) were wrong (and perhaps wildly jumping) and since position and velocity errors are inputs to the attitude controller, the tilt oscillated as well. It's not that the IMU did a poor job estimating attitude (tilt), but rather that the attitude controller was asked to tilt in erratic directions to compensate for the erratic and incorrect position and velocity readings.

It's quite hard to detect when position and velocities are "obviously incorrect", especially when they come from VIO, where the optimization result can jump around in non ideal conditions, so I'm not surprised there was not a more graceful anomaly detection.
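
A toy sketch of that cascade in Python (gains invented):

    def tilt_command(pos_err, vel_err, kp=0.5, kd=0.8):
        # outer loop: position/velocity error in, desired tilt (rad) out
        return kp * pos_err + kd * vel_err

    # healthy estimates: small smooth errors give small smooth tilts
    print(tilt_command(0.05, 0.01))          # ~0.03 rad

    # jumping VIO estimates: the commanded tilt jumps with them, even
    # though the IMU's attitude estimate is perfectly fine throughout
    for pos_err in [0.05, -1.4, 2.2, -1.8]:
        print(round(tilt_command(pos_err, 0.0), 2))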


Can anyone hint why they wouldn't use a gyro?

When it's on land, they can make the gyro reliably point 'down'. Then at least during flight they know which way 'down' is.

Would this be too fragile for Mars?


When the article refers to an IMU, they’re referring to a combination of sensors; most commonly on Earth that’d be a 3-axis gyro, a 3-axis accelerometer, and a 3-axis magnetometer (compass). The problem with just using that is accumulated error and drift. On super small aircraft like this one, we usually use MEMS parts instead of big spinning physical gyros, and they don’t have great long-term performance.

MEMS gyros measure angular rate, not absolute angle; to compute an actual angle, you’re taking the integral of the rate from t=0 to now. Any small errors in the measurements add up quickly to give you completely nonsensical results. For drones-on-Earth, we use a variation of the Kalman filter to combine short-term and long-term measurements. As an example, an accelerometer requires a double-integral to turn into position, so errors accumulate very quickly, but we can correct those errors using GPS. The accelerometer and its integrals give us really quick acceleration, velocity, and position updates (at, say, 500 Hz), and then the GPS is used to correct the long-term position and velocity (at, say, 5 Hz).
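
A toy sketch of that fast-integrate/slow-correct pattern in Python (all values invented, nothing like a real estimator):

    import random

    dt, alpha = 0.002, 0.3       # 500 Hz IMU; GPS blend factor
    pos, vel = 0.0, 0.0          # estimates; the true vehicle hovers at 0

    for i in range(5000):        # 10 seconds
        accel = random.gauss(0, 0.2)     # accelerometer noise -> drift
        vel += accel * dt
        pos += vel * dt
        if i % 100 == 0:                 # 5 Hz GPS fix near the truth
            gps = random.gauss(0, 0.05)
            vel += alpha * (gps - pos)   # crudely bleed error out of velocity
            pos += alpha * (gps - pos)

    print(round(abs(pos), 3))    # stays bounded instead of random-walking away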


They have an inclinometer, so they know which way is down.


It’s cool to see how they compensated for the IMU inaccuracy. I bought an IMU off of Amazon once and tried to use it to measure the position of a steering wheel. This worked for one rotation, but as it mentions, it’s incredibly hard to compensate for the error margin — as you integrate more measurements, the error builds up irreconcilably to the point where it’s a useless instrument.

I am sure the one on the Mars helicopter had more precision though :)


I might be wrong, but I think the data from the accelerometers would be enough to know the position of the steering wheel without any accumulating error. Of course, the gyro data can be incorporated to improve the system.


The accelerometer would be fine until you take a corner. Then things would get exciting.


After five successful flights, the Mars Helicopter had a minor incident during its sixth voyage that was fixed using advanced control systems.


The control system is clearly designed wrong. Navigation input should not be able to affect the closed-loop control system directly. It should affect only the calibration, incrementally. If that had been done, there would have been no flight instability, just disagreement between the IMU and navigation about how far they had flown.
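
A toy sketch of what "affect only the calibration, incrementally" could look like, in Python (gains and limits invented):

    def trim_bias(bias, imu_pos, nav_pos, gain=0.01, max_step=0.001):
        # navigation only nudges an IMU bias estimate, never the loop itself
        step = gain * (nav_pos - imu_pos)
        step = max(-max_step, min(max_step, step))   # slew-rate limit
        return bias + step

    bias = 0.0
    for nav_pos in [10.0, 10.0, 10.0]:   # wild nav spike, e.g. bad optical flow
        bias = trim_bias(bias, imu_pos=0.0, nav_pos=nav_pos)
        print(round(bias, 3))            # 0.001, 0.002, 0.003: no step change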

There are certainly people involved in the project who could have explained this to them. I hope they are learning fast.


> There are certainly people involved in the project who could have explained this to them. I hope they are learning fast.

This is how nearly all modern high-end quadcopter drones fly - navigation using optical flow camera sensors short-circuited directly into attitude/motor control loops.

I guess there is no small chance they entrusted helicopter autonomous operations programming to people with a quadcopter background.


It is one thing to feed averaged relative motion into the control system, entirely another to feed in absolute position. The flight instability in response to navigational position error is incontrovertible proof of a mistaken design. Fixing the off-by-one coding error just papers over the design error. Testing clearly failed to detect the mistake.

I would not be at all surprised to learn that commercial quads share the design mistake. I was surprised to learn that NASA professionals copied it into a Mars probe. But, notably, not into the vehicle that delivered the lander.


Put tape over the downward facing camera on almost any quadcopter and you'll soon find out why...

It turns out sideways drift accumulates very quickly - so quickly that unless you are a very practiced drone operator it's very hard to compensate for by hand.

GPS compensates somewhat, but obviously that isn't available on Mars (or indoors).



