
You know... I've been working at an interesting company that combines AgTech and aviation (drones, but big ones) for a while now, and using JSON over Websockets for IPC is one of the best decisions we've made. We don't use it for everything, mind you; there are lower-level protocols that we use to talk to embedded hardware devices, but when we can, we do. And while it's a draft standard, we basically riffed on a variant of this for most of it: https://datatracker.ietf.org/doc/html/draft-inadarei-api-hea...
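For a concrete sense of what that style of message looks like, here's a rough sketch in the spirit of that draft. The "status"/"checks" shape comes from the draft; the particular component and values here are purely illustrative, not our actual wire format:

```python
import json

# Sketch of a health-check-style status message, loosely following the
# draft-inadarei "Health Check Response Format for HTTP APIs".
# The component names and values below are made up for illustration.
message = {
    "status": "pass",  # the draft allows "pass", "fail", or "warn"
    "description": "health of a hypothetical flight-planner service",
    "checks": {
        "uptime": [
            {"componentType": "system", "observedValue": 3600, "observedUnit": "s"}
        ],
    },
}

# Round-trip through JSON, as a client consuming the feed would.
encoded = json.dumps(message)
decoded = json.loads(encoded)
```

The nice part is that any language with a JSON library can produce or consume this with no extra tooling.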

The reason I love it so much is that it's just so straightforward to make a server or client that can talk to it. All of our embedded Linux systems are written in C++ right now, and they have absolutely no problem publishing and consuming messages in our standard format. One of the original driving factors is that we have some web-based and Electron-based UIs, and any protocol we invented that wasn't HTTP- or Websocket-based would have required them to do twice as much work: first, connecting to whatever service from a "backend" server and implementing whatever protocol it needed, and second, exposing that backend service to the frontend over a Websocket (generally... since it needed live updates). By standardizing on our in-flight services exposing everything as Websockets natively, we pretty much eliminated a whole tier of complicated logic. The frontends have a single generic piece of code with standardized reconnect/timeout/etc. logic in it, and the backends just have to #include <WSServer.h> and instantiate an object to be able to publish to listeners.
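The reconnect side of that generic frontend code is conceptually just exponential backoff with jitter. A minimal sketch of the idea (the function name and constants here are hypothetical, not lifted from our codebase):

```python
import random

def reconnect_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter for Websocket reconnects.

    attempt 0 waits up to `base` seconds, and each subsequent attempt
    doubles the ceiling until it hits `cap`. The random jitter keeps a
    fleet of clients from reconnecting in lockstep after an outage.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

A client would sleep for `reconnect_delay(n)` after the nth failed connect, then reset `n` to zero once a connection succeeds.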

I definitely didn't start there. And I 100% understand where your opinion comes from... from so many different angles a lot of the "modern" web systems shouldn't come within a mile of a safety critical system. Websockets though? They're great! And while JSON isn't necessarily the most efficient encoding, it sure does make debugging easy. We run everything on a closed network that usually doesn't have an Internet connection, so we don't run TLS in between the ground and air systems. If we need to figure out what's going on and an interface is acting up, we can just tcpdump it and have human-readable traffic to inspect.

The flight critical stuff is isolated from all of this and spits out a serial telemetry feed (Mavlink). We do send that directly to the ground station over a dedicated radio, but we also have an airborne service that cooks that into Websockets and in many cases the Websocket-over-very-special-WiFi connection has been more robust than the 915MHz serial link.

And it's not as if existing protocols like NMEA are all that good either.




Thanks for sharing that! Very interesting. There's even less margin for error in the air than at sea. At least at sea you can still float if the power goes out, and at that point a sewing machine is all you need for most critical problems.


We've actually leaned in pretty hard on using "standard" protocols as much as we can:

- We have a flight planning module that takes multiple polygons as input and returns a (large) list of waypoints covering the regions the polygons enclose. When I was trying to work out the request/response format, I decided to use GeoJSON with some extra properties added. You submit the GeoJSON boundaries with a POST request, the planner does a bunch of computational geometry and graph algorithms, and returns GeoJSON. If you want to, you can just load the flight plan up in QGIS or ArcGIS or whatever and inspect it directly.

- We also accumulate quite a bit of geospatial data that we need to post-process. We use SQLite with the SpatiaLite extension to store it. Same story as the flight plans... you can really easily load it into QGIS or GeoPandas or whatever you want and do your analysis.

- We need to stream video down to the ground station and ended up using RTSP, H.264, and GStreamer to do that. You can connect to the video feed using our ground station software if you want, but you can also just connect to it using VLC. And internally this meant that if we wanted to do hardware-accelerated encoding it was just a matter of changing the GStreamer pipeline. Or... if I get my way over the next month or so, we'll be adding a HUD with extra telemetry right into the video feed, again using GStreamer plugins.
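To make the flight-planner exchange concrete, a request in that style might look something like this sketch. The extra property names ("overlap_pct", "altitude_m") are made up for illustration; I'm not quoting our actual schema:

```python
import json

# Hypothetical planner request: a single polygon boundary as a GeoJSON
# FeatureCollection, with a couple of invented planner-specific properties.
request = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"overlap_pct": 70, "altitude_m": 120},
            "geometry": {
                "type": "Polygon",
                "coordinates": [[
                    [-93.00, 41.00],
                    [-93.00, 41.01],
                    [-92.99, 41.01],
                    [-92.99, 41.00],
                    [-93.00, 41.00],  # GeoJSON rings close on the first point
                ]],
            },
        }
    ],
}

body = json.dumps(request)  # this string is what would go in the POST body
```

Because the response is also GeoJSON, the same file drops straight into QGIS for inspection.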
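For the storage point, here's a minimal sketch of the idea using plain SQLite from the standard library. With SpatiaLite loaded you'd get real geometry columns and spatial functions; here WKT strings stand in so the sketch stays self-contained:

```python
import sqlite3

# Simplified stand-in for a SpatiaLite-backed store: geometry kept as WKT
# text. The table and column names are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE samples (id INTEGER PRIMARY KEY, captured_at TEXT, geom_wkt TEXT)"
)
conn.execute(
    "INSERT INTO samples (captured_at, geom_wkt) VALUES (?, ?)",
    ("2024-05-01T12:00:00Z", "POINT(-93.0 41.0)"),
)
row = conn.execute("SELECT geom_wkt FROM samples").fetchone()
```

The payoff of sticking with SQLite is exactly the interoperability described above: the same file opens in QGIS, GeoPandas, or the sqlite3 CLI.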
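And for the video point, switching to hardware-accelerated encoding really can be a one-element change in the pipeline description. A sketch of that idea; x264enc and nvv4l2h264enc are real GStreamer elements (software and NVIDIA hardware H.264 encoders, respectively), but this exact pipeline is illustrative, not our production one:

```python
def sender_pipeline(hw_accel: bool) -> str:
    """Build an illustrative GStreamer sender-pipeline description.

    Only the encoder element changes between the software and
    hardware-accelerated variants; the rest of the chain stays put.
    """
    encoder = "nvv4l2h264enc" if hw_accel else "x264enc tune=zerolatency"
    return f"v4l2src ! videoconvert ! {encoder} ! h264parse ! rtph264pay name=pay0"
```

A HUD overlay would slot into the same string as another element between the source and the encoder.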



