A few minutes of dorkily waving the laptop around showed how the values changed, and I discovered the range differences were so straightforward that I could compute the device orientation (which way is up) using a simple shell script!
Five (okay, maybe ten) minutes later, X11 was happily rotating and inverting the display automatically as I flipped the thing around. (Seeing Chrome auto-flip exactly like a tablet does (albeit with nicer animations) was neat.)
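For the curious, the loop looks roughly like this in Python (the original was a shell script; the sysfs path and the sign conventions here are guesses, since both vary by machine):

```python
import subprocess

# Assumed path: on many laptops the accelerometer shows up as an IIO
# device under sysfs, but the exact device number differs per machine.
ACCEL = "/sys/bus/iio/devices/iio:device0/in_accel_{}_raw"

def read_axis(axis):
    with open(ACCEL.format(axis)) as f:
        return int(f.read())

def orientation(x, y):
    """Map raw x/y accelerometer readings to an xrandr rotation name.
    Which sign means which direction is device-specific - an assumption."""
    if abs(x) > abs(y):
        return "left" if x > 0 else "right"
    return "normal" if y < 0 else "inverted"

def rotate_display():
    # xrandr applies the rotation; run this in a polling loop.
    rot = orientation(read_axis("x"), read_axis("y"))
    subprocess.run(["xrandr", "--orientation", rot], check=True)
```

Poll it every second or so and the display follows the device around.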
...with one caveat. Every single orientation change made the backlight kick off and back on again.
Tons of printk() debugging was insufficient to fully track down what was causing it. My "solution" was to neuter the bits of KMS that actually did the low-level DPMS calls - the result was some mildly scary display tearing in certain circumstances, but the backlight didn't flicker anymore.
Unfortunately, 'xset dpms force standby' (or 'suspend') no longer worked, and neither did anything else that put the display to sleep via DPMS. Whoops.
libdrm, KMS, DRM and X11 are a mess.
(As for what I was unable to get to the bottom of - something at the X11 level was deciding, in certain circumstances, that some rotates required "full GPU hard resets" while others didn't. For example, rotating left was fine, but rotating right was not. And inverting was fine, but switching back to normal was not. But, get this, _if I had an external display attached_ (or had the VGA port forced on), rotating left and right were both fine! Given that I was later able to reproduce this behavior on both an Intel and an AMD system, this is why I glare at KMS/DRM and call them a mess.)
The code is pretty straightforward. It opens up a socket to the host, then for each motion update it creates a MotionData value, sets the properties on it, encodes it into JSON and sends it to the script running in Blender. It reads any data the host sends and discards it.
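In Python, the sender side of that shape boils down to something like this (the field names are assumptions; the real MotionData properties may differ):

```python
import json
import socket

def send_motion(sock, x, y, z):
    # Build the motion update, encode it as JSON, and ship it off.
    # "x"/"y"/"z" are assumed key names, not the actual protocol's.
    sock.sendall(json.dumps({"x": x, "y": y, "z": z}).encode("utf-8"))
    # Read and discard anything the host sent back.
    sock.setblocking(False)
    try:
        while sock.recv(4096):
            pass
    except BlockingIOError:
        pass  # nothing waiting - the usual case
    finally:
        sock.setblocking(True)
```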
...my first thought was why JSON? I'd be curious to know the reasoning, since if I wanted to do this same task, I'd just send the values directly as binary --- 4-byte floats seem the natural choice here, since that's the representation both sides ultimately want. Also, this protocol is clearly unidirectional, so there's no need to even bother with the other direction.
It's a straightforward format that's widely adopted with easy-to-use libraries, and the data's tagged (with keys) so you get a bit of extra information when you're debugging.
It's a decent choice when you're building a prototype, or anything that isn't supposed to be extremely performance-oriented. If it is, I totally agree, a binary protocol would be more appropriate.
I know it may be a contrived example, but what happens if you try to send it several MB of JSON, well-formed or otherwise? Examining the code more closely, I notice that it only reads up to 4K at a time, and there is no message framing, despite using stream-oriented TCP. It's going to get very confused if it gets more or less than one exact JSON "message" per socket read. If you modify it so it does do framing, then the question above still holds... what happens if it runs out of memory, etc...?
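The usual fix is length-prefix framing: prefix every JSON payload with its byte length, so the reader knows exactly how much to consume regardless of how TCP splits it up. A sketch (my own, not from the code being discussed):

```python
import json
import socket
import struct

def send_msg(sock, obj):
    # Prefix the JSON payload with its length as a 4-byte big-endian int.
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n):
    # TCP is a byte stream: keep reading until we have exactly n bytes.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length))
```

Of course, this immediately raises the next question: what do you do when the length prefix says several MB?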
All questions which are completely avoided by a simple "read 4 numbers from the socket and set positions" protocol. Code in which error cases simply cannot occur has no need to fear even a fuzzer feeding it input. KISS, YAGNI.
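To make that concrete, here's a sketch of the whole protocol, assuming four little-endian 32-bit floats per message. Every message is exactly 16 bytes, so framing is trivial and there's nothing to overflow:

```python
import socket
import struct

# Each message: exactly four 4-byte little-endian floats, 16 bytes total.
FMT = "<4f"
SIZE = struct.calcsize(FMT)  # 16

def send_values(sock, a, b, c, d):
    sock.sendall(struct.pack(FMT, a, b, c, d))

def recv_values(sock):
    # Fixed message size means "read until we have 16 bytes" is the
    # entire framing logic.
    buf = b""
    while len(buf) < SIZE:
        chunk = sock.recv(SIZE - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return struct.unpack(FMT, buf)
```

Garbage input just decodes to garbage floats; it can't make the parser allocate unbounded memory or get out of sync.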
(The larger point I'm making is that simplicity is often surprisingly easy to apply, yet rather undervalued in these times of preferring more abstraction. Of all the easily-exploited IoT out there, I bet the majority of them would not be so if some simplicity had been applied to their design.)
Then again, TCP is a lot easier to work with, since you get an error on the client side if you made a mistake. With UDP there is no way to know if the packet was received.
Also, once you go with a binary encoding, a fixed-point integer format is almost always the better choice over sending floats. Depending on the spatial resolution you need (which isn't much in this example), you can shrink the message by using 8-bit or 16-bit fixed-point integers - or, if you need more precision than a 32-bit float in the same space, a 32-bit fixed-point number.
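For example, 16-bit fixed-point, assuming the values live in [-1.0, 1.0) (pick whatever bounds your data actually has):

```python
import struct

SCALE = 2 ** 15  # 16-bit signed: values map onto [-32768, 32767]

def encode(values):
    # Quantize each float to a signed 16-bit int, clamping at the edges.
    ints = [max(-SCALE, min(SCALE - 1, round(v * SCALE))) for v in values]
    return struct.pack("<%dh" % len(ints), *ints)

def decode(data):
    n = len(data) // 2
    return [i / SCALE for i in struct.unpack("<%dh" % n, data)]
```

That's half the wire size of 32-bit floats, with a uniform resolution of about 3e-5 across the whole range - plenty for positioning a model on screen.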
After just a few hours of playing around I was able to set up a Python script that reads people’s names out of a CSV file, launches Blender as a background process, generates a model of a little plate with text on it thanking a person by name, exports that model as an STL, calls up Slic3r (which can also be run headlessly) to generate the gcode for the printer, and then finally uploads that file to my 3D printer.
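The driver script for a pipeline like that can be surprisingly small. A sketch (the filenames, the "make_plate.py" Blender script, and the CSV layout are all assumptions; the upload step is printer-specific, so it's left as a comment):

```python
import csv
import subprocess

def output_names(name):
    """Derive per-person STL and gcode filenames (pure helper)."""
    base = "thanks_" + name.replace(" ", "_").lower()
    return base + ".stl", base + ".gcode"

def make_card(name):
    stl, gcode = output_names(name)
    # Blender in background mode runs a script that builds the plate
    # model with the name on it and exports an STL. "make_plate.py"
    # is a hypothetical script name.
    subprocess.run(["blender", "--background", "--python", "make_plate.py",
                    "--", name, stl], check=True)
    # Slic3r runs headlessly too, turning the STL into printer gcode.
    subprocess.run(["slic3r", stl, "--output", gcode], check=True)
    return gcode

def main():
    with open("names.csv") as f:
        for name, *_rest in csv.reader(f):
            make_card(name)
            # Uploading the gcode is printer-specific
            # (e.g. OctoPrint's REST API), so it's omitted here.
```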
Previously I was writing people little notes by hand. Not only is this much cooler, it takes considerably less work from me. I just run a single command to execute the script and then walk over to my printer and hit “print”. A couple hours later I have a pile of little thank you cards.
I really think Blender is the crown jewel of open source software.