I’m not familiar with the specific latencies for drone racing, but hierarchical planning/control is very, very common in robotics.
It’s natural to, say, plan a longer-term trajectory at 1Hz, a short-term trajectory at 10Hz, and perform control at 100Hz, with those rates obviously varying based on the system.
And since (generally speaking) higher levels of planning require more compute resources, it might make sense to partition the compute and take the latency penalty.
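Just to make that rate split concrete, here's a minimal sketch of a three-level hierarchy in Python. The 1/10/100Hz rates and the planner/controller bodies are placeholders of my own, not any particular autopilot stack:

    # Minimal sketch of a three-level planning/control hierarchy; the rates and
    # the planner/controller bodies are illustrative placeholders only.
    import time

    class HierarchicalStack:
        def __init__(self):
            self.long_term_plan = None   # e.g. a coarse waypoint list
            self.short_term_traj = None  # e.g. a smooth local trajectory
            self.setpoint = 0.0          # what the low-level controller tracks

        def plan_long_term(self):        # ~1 Hz: expensive global planning
            self.long_term_plan = "waypoints"

        def plan_short_term(self):       # ~10 Hz: refine the plan into a local trajectory
            self.short_term_traj = "local trajectory from " + str(self.long_term_plan)

        def control(self):               # ~100 Hz: track the current setpoint
            self.setpoint += 0.0         # stand-in for a PID/LQR update

    def run(stack, duration_s=1.0, ctrl_hz=100):
        dt = 1.0 / ctrl_hz
        for tick in range(int(duration_s * ctrl_hz)):
            if tick % ctrl_hz == 0:          # every 100 ticks -> 1 Hz
                stack.plan_long_term()
            if tick % (ctrl_hz // 10) == 0:  # every 10 ticks -> 10 Hz
                stack.plan_short_term()
            stack.control()                  # every tick -> 100 Hz
            time.sleep(dt)

    run(HierarchicalStack())

The point of the structure is that each faster loop only ever consumes the latest output of the slower loop above it, so the slow levels can live on bigger (or remote) compute without stalling the fast ones.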
Makes sense in general. In the specific case of these racers, it seems you wouldn't be losing out on much by keeping everything local, though?
That said, I'm still mostly curious about how long it would take you to get the sensory data off the drone. My gut is that the video transmission alone would blow through most of the time budgets you listed. I'd love to see a good rundown of the relevant latency values involved.
Good high-resolution digital FPV video systems based on specialized video codecs (H.264/H.265), like Walksnail or DJI, hover at around 30ms of latency glass-to-glass.
Analog systems used in racing are at around 15ms glass-to-glass, with most of the latency coming from the camera's image processing. They'd be a bit non-standard to train or build a system around, though, without transforming back into the framebuffer/pixel-value domain, which would introduce some slight extra latency.
There's a rather unique uncompressed transmission system called HDZero which roughly matches analog latency.
Purpose-built low-latency uplinks add 5-10ms to get data turned around and back into the flight controller, albeit at fairly low bitrates as they're mostly serial-based.
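Putting those numbers together as a rough back-of-envelope (the 10ms of offboard compute time is just an assumption on my part):

    # Back-of-envelope round-trip budget for offboard control, using the latency
    # figures above; the offboard compute time is an assumed value.
    video_down_ms   = 30.0   # digital FPV glass-to-glass (analog/HDZero ~15 ms)
    offboard_cpu_ms = 10.0   # assumed inference/planning time on the ground
    uplink_ms       = 7.5    # midpoint of the 5-10 ms serial uplink figure

    round_trip_ms = video_down_ms + offboard_cpu_ms + uplink_ms   # ~47.5 ms

    for name, hz in [("rate control", 4000), ("short-term planning", 10), ("long-term planning", 1)]:
        period_ms = 1000.0 / hz
        fits = "fits" if round_trip_ms <= period_ms else "does not fit"
        print(f"{name} @ {hz} Hz: period {period_ms:.2f} ms -> round trip {fits}")

So the round trip is orders of magnitude too slow for the kHz-rate inner loop, but comfortably inside a 10Hz or 1Hz planning period.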
So for realtime control (kinematics), which runs at multi-kHz rates on racing drones, onboard control is pretty much a must, but for both short-term and long-term planning, offboard control becomes more practical.
Most non-FPV drones are already architected this way, with gyro-to-motor PID control performed on a microcontroller running an RTOS, short-term planning information coming in asynchronously from a larger SoC running Linux and ML-type stuff, and long-term control information coming down over the air.
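A toy sketch of that split: a fast gyro-to-motor PID loop that only ever reads the latest setpoint, while a slower "companion computer" thread updates it asynchronously. The gains, rates, and fake plant here are purely illustrative, not any real flight controller:

    # Fast PID inner loop with asynchronously-updated setpoints; all numbers are
    # illustrative and the "plant" is a toy integrator, not real drone dynamics.
    import threading, time

    setpoint = 0.0          # desired roll rate, updated asynchronously
    lock = threading.Lock()

    def companion_computer():               # stands in for the Linux SoC / offboard planner
        global setpoint
        for target in [0.0, 50.0, -50.0, 0.0]:
            with lock:
                setpoint = target
            time.sleep(0.1)                  # ~10 Hz planning updates

    def pid_loop(kp=0.8, ki=0.1, kd=0.02, hz=1000, duration_s=0.4):
        integral, prev_err, gyro = 0.0, 0.0, 0.0
        dt = 1.0 / hz
        for _ in range(int(duration_s * hz)):
            with lock:
                target = setpoint            # read the latest value, never block on the planner
            err = target - gyro
            integral += err * dt
            motor_cmd = kp * err + ki * integral + kd * (err - prev_err) / dt
            prev_err = err
            gyro += motor_cmd * dt           # toy plant model in place of real dynamics
            time.sleep(dt)

    threading.Thread(target=companion_computer, daemon=True).start()
    pid_loop()

The key property is that the inner loop never waits on the planner; stale setpoints just get tracked until fresher ones arrive.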
Thanks! This is really cool stuff. I'll start digging into it online. Curious what the resolution constraints are for this video. I'm assuming that for cinema purposes they have a separate control video feed to keep things fast?
And I now have to convince myself that I don't, in fact, need to physically play with one of these things. :D
Right. DJI's FPV video link has around 60 Mbit/s of capacity, with 50 Mbit/s used for video in stock form. I think Walksnail is similar.
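For a rough feel of what that allocation means, here's the arithmetic assuming a 1080p60 stream (an assumption for illustration, not necessarily the exact mode these links run):

    # Rough feel for a 50 Mbit/s video allocation; the 1080p60 mode is an
    # assumption for illustration, not the exact format these links use.
    link_bps = 50e6                  # video share of the ~60 Mbit/s link
    width, height, fps = 1920, 1080, 60
    pixels_per_s = width * height * fps
    bits_per_pixel = link_bps / pixels_per_s          # ~0.40 bits/pixel
    raw_bps = pixels_per_s * 12                       # raw 4:2:0, 8-bit -> 12 bits/pixel
    print(f"{bits_per_pixel:.2f} bits/pixel, ~{raw_bps / link_bps:.0f}x compression needed")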
For FPV/"freestyle" cinema and even FPV YouTube videos, the onboard cinema camera is totally separate from the video link: "cinelifter" FPV drones lift RED/ARRI/Blackmagic cameras, while "freestyle" FPV drones carry a GoPro.
For "camera"-style drones, a gimbal camera separate from the "main" flight camera is used. These often have real-time capability, but with downscaling and a higher-latency, lower-resolution video downlink, like DJI's Zenmuse series found on the Matrice and Inspire drones. On these drones you basically pick and choose what's sent to your display given the downlink limitations, but you also have a much higher latency tolerance since they can fly themselves.