It switches the detection range, but not the actual power supply. You can ramp from <5 uA up to 500 mA and back all you want. I haven't noticed any glitching on the actual supply.
With the advent of cheap 16-bit, 18-bit or even 24-bit sigma-delta ADCs, it isn't a very hard circuit: a 1 Ohm sense resistor and a 2.048V reference get you sensing down to 31uA (16-bit), 8uA (18-bit) or less with a 24-bit ADC.
Sigma-delta gets pricey if you want higher speeds, though. But it's possible.
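A quick back-of-the-envelope check of those figures (the constants are just the ones mentioned above):

    #include <stdio.h>

    /* LSB current = (Vref / 2^bits) / Rsense, with Vref = 2.048 V, Rsense = 1 ohm */
    int main(void)
    {
        const double vref = 2.048;       /* volts */
        const double rsense = 1.0;       /* ohms  */
        const int bits[] = {16, 18, 24};

        for (int i = 0; i < 3; i++) {
            double lsb_ua = (vref / (double)(1L << bits[i])) / rsense * 1e6;
            printf("%2d-bit: %.3f uA per LSB\n", bits[i], lsb_ua);
        }
        return 0;
    }

That works out to about 31.25 uA, 7.8 uA and 0.12 uA per LSB respectively.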
Often ESP32 devices at low power can still transmit, but will start to fail to receive acknowledgements.
I have a guess, but no real way to test what's happening. On the scope the start of a transmission sags the supply hard, but for most of the packet the ramp rate is relatively low. Once the transmission stops and the radio turns over to receive mode, the ramp rate is much faster. On a third device I can record packets and see that they are being sent and acknowledged, but often retransmitted by the ESP, which didn't seem to hear the acknowledgement.
It really seems like it has to be something like that. The problem is there is no detail in the docs and no status bits in the chip. There's no way to know when the auto-cal runs.
One of the several things I did to eliminate the problem was to disable the auto-cal during a UART reception (the STM32 is the bus master so it knows when it will be receiving) and re-enable it when it is finished. That absolutely confirmed the auto-cal is the source of the glitch, but I don't think I'll ever get a true why unless an ST engineer wants to chime in!
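For anyone curious what that gating can look like, here is a minimal sketch assuming an STM32L4-class part, where the MSI auto-cal is the LSE "PLL-mode" controlled by the MSIPLLEN bit in RCC->CR. The two UART helpers are hypothetical placeholders, not code from my project:

    #include "stm32l4xx.h"   /* CMSIS device header -- assumes an STM32L4-class part */

    /* Hypothetical placeholders for however the application frames a UART
       transfer; only the RCC bit twiddling is the point of this sketch. */
    extern void start_uart_reception(void);
    extern int  uart_reception_done(void);

    static void uart_receive_without_autocal(void)
    {
        RCC->CR &= ~RCC_CR_MSIPLLEN;      /* pause MSI auto-cal (LSE PLL-mode) */

        start_uart_reception();           /* kick off the RX transfer          */
        while (!uart_reception_done()) {  /* wait for it to complete           */
        }

        RCC->CR |= RCC_CR_MSIPLLEN;       /* resume auto-cal against LSE       */
    }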
This is a good resource; however, it didn't apply in my situation because it describes the manual calibration process, not the auto-cal (which the F0 probably doesn't even have).
I still haven't come across anything that explains in detail how the auto-cal works and what precautions one needs to take while it is running. The reference manual section is something like one paragraph and can be summarized as:
"You can turn this on and it will calibrate your clock. You can also turn it off."
If I had to guess, it probably does something similar to the manual process, but just in the MCU logic. It's the lack of detail that got me: I basically ran out of things to try on the UART itself and started looking around at other parts of the chip to see what could at least be indirectly related.
My guess is that the receiver clock glitches in some way when the MSI auto calibration runs, but it never showed up on the transmitter (and the device on the other side of the connection has never had a reception issue).
I ended up disabling the auto cal feature during a UART reception and then turning it back on when the reception is done.
SPI is definitely better as far as clocking, but MCU support as a SPI receiver is sometimes a lot less convenient to deal with.
A lot of UARTs have a synchronous mode which adds a dedicated clock signal - I've used that before out to a couple MHz.
In this application though, I'm only running 1 MHz so I really didn't think I should need a separate clock (and, it turns out, still don't).
According to the documentation there is no calibration as such; the MSI clock simply runs in a phase-locked loop (PLL) configuration with the LSE (32.768 kHz). For example, in 1 MHz mode the MSI is set up to run at approximately 1 MHz; this clock then goes into a downscaler which divides by a factor of 31 to approximately 32 kHz, and this is compared to the LSE clock to generate feedback for the MSI clock. When locked, the MSI runs at 1015.8 kHz (32.768 * 31), so it is out by 1.58%.
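Sanity-checking that arithmetic:

    #include <stdio.h>

    /* 32.768 kHz LSE times the /31 feedback divider gives the locked MSI rate */
    int main(void)
    {
        double msi_khz = 32.768 * 31;                          /* 1015.808 kHz     */
        double err_pct = (msi_khz - 1000.0) / 1000.0 * 100.0;  /* vs nominal 1 MHz */
        printf("locked MSI = %.3f kHz (%.2f%% above nominal)\n", msi_khz, err_pct);
        return 0;
    }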
It's also possible that the design hasn't been thoroughly tested and the PLL doesn't lock in certain conditions which could leave you with an unstable clock.
If you really need the accuracy, then regularly time the LSE clock using a timer clocked from MSI and apply the best trim values as described in application note AN4736 (How to calibrate STM32L4 Series microcontrollers' internal RC oscillator).
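A rough sketch of what one trim step could look like on an L4. The tick-counting helper is a placeholder you'd build from a timer input capture, and I'm assuming a larger MSITRIM value raises the MSI frequency (check the reference manual for your part):

    #include "stm32l4xx.h"   /* CMSIS device header -- assumes an STM32L4-class part */
    #include <stdint.h>

    /* Placeholder: count MSI-clocked timer ticks across n LSE periods
       (e.g. via a timer input capture fed by LSE, as AN4736 suggests). */
    extern uint32_t msi_ticks_per_lse_periods(uint32_t n_periods);

    void msi_trim_step(void)
    {
        const uint32_t n = 1024;                           /* LSE periods to average over */
        uint32_t measured = msi_ticks_per_lse_periods(n);
        uint32_t expected = (uint32_t)((uint64_t)1000000u * n / 32768u); /* ticks if MSI were exactly 1 MHz */

        uint32_t trim = (RCC->ICSCR & RCC_ICSCR_MSITRIM_Msk) >> RCC_ICSCR_MSITRIM_Pos;

        /* Assumes a larger MSITRIM raises the MSI frequency; flip the
           comparisons if that turns out to be backwards on your part. */
        if (measured > expected && trim > 0x00)      trim--;  /* MSI running fast */
        else if (measured < expected && trim < 0xFF) trim++;  /* MSI running slow */

        RCC->ICSCR = (RCC->ICSCR & ~RCC_ICSCR_MSITRIM_Msk) | (trim << RCC_ICSCR_MSITRIM_Pos);
    }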
The GIL is not there to prevent data corruption on shared objects - it only protects the interpreter's internal state. The fact that you can sometimes get away with it is an accident of the GIL's implementation, not a feature anyone should rely on. It also means that you cannot rely on that behavior not to change in successive versions of CPython.
The only safe way to share state between threads in CPython is locks/mutexes/message passing/etc. Even something as simple as adding to an integer is absolutely not made thread safe by the GIL.
There are TNG episodes that specifically deal with that, and the problems it causes. I think ENT had a few as well, instances that led to the creation of the Prime Directive.
I will have to check those out since I've not seen them. I'm open-minded to it, but also pretty confident that the weight of the logic will eventually fall on the side of being good and helpful to others if we ever make it to the stars.
Schematics: https://www.nordicsemi.com/Products/Development-hardware/Pow...