This alone gave us the flexibility to expose the Python modules and objects as a simple JSON API, and the DB load/save came for free.
I use Flask and I'm not sure where pickling comes in. I have built a desktop application in PyQt, though, and the multiprocessing module needs pickle-able data.
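To illustrate the pickle-ability requirement mentioned above, here is a minimal stdlib-only sketch (the payload and the lambda are made up for the example, not taken from any particular app):

```python
import pickle

# multiprocessing sends work to child processes by pickling it, so
# everything passed to e.g. Pool.map must survive a pickle round-trip.

# Plain data structures are fine:
payload = {"nums": [1, 2, 3], "label": "batch-7"}
restored = pickle.loads(pickle.dumps(payload))
assert restored == payload

# Lambdas and other locally defined callables are not, which is a
# common failure mode when dispatching closures from a GUI layer:
try:
    pickle.dumps(lambda x: x * 2)
except Exception as exc:
    print("not pickle-able:", type(exc).__name__)
```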
However, by using marshmallow for serialization, I found the resulting JSON a much more manageable format to reason about. In my specific case, where the lifetime of a Python object could extend over multiple sessions and pass through the runtime -> save -> db -> load -> runtime barrier, JSON was a hugely meaningful choice. The project used a graph of connected Python objects whose relations and states needed to be retained across time and memory barriers.
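A minimal stdlib-only sketch of that save/load barrier — `Node`, `serialize`, and `deserialize` here are hypothetical names, not from the actual project:

```python
import json

# The object graph is flattened to JSON-friendly dicts, stored as text
# (e.g. in a DB column), and rebuilt in a later session.

class Node:
    def __init__(self, name, edges=None):
        self.name = name
        self.edges = edges or []  # names of connected nodes

def serialize(nodes):
    return json.dumps([{"name": n.name, "edges": n.edges} for n in nodes])

def deserialize(text):
    return [Node(d["name"], d["edges"]) for d in json.loads(text)]

graph = [Node("a", ["b"]), Node("b")]
stored = serialize(graph)       # what would be written to the DB
restored = deserialize(stored)  # what a later session reads back
```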
With a general HTTP REST API, I find that bundling all the related records of an entity into a single API call saves round-trip time for the client. In this case, instead of hand-building a dict and then calling json.dumps(), I find marshmallow a better choice: the declaration of the serializer itself clearly reflects the structure and the entity relations.
https://github.com/marshmallow-code/marshmallow/issues/171 (if anyone is interested).
I just can't get anything else to work. We have a sophisticated desktop PyQt application that could really do with better serialization for multiprocessing.
I have heard of Pathos as a replacement for multiprocessing, but I've never given it a try.